00:00:00.001 Started by upstream project "autotest-per-patch" build number 132417 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.054 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.056 The recommended git tool is: git 00:00:00.056 using credential 00000000-0000-0000-0000-000000000002 00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.078 Fetching changes from the remote Git repository 00:00:00.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.124 Using shallow fetch with depth 1 00:00:00.124 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.124 > git --version # timeout=10 00:00:00.162 > git --version # 'git version 2.39.2' 00:00:00.162 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.789 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.800 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.812 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.812 > git config core.sparsecheckout # timeout=10 00:00:04.823 > git read-tree -mu HEAD # timeout=10 00:00:04.838 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.856 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.856 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.943 [Pipeline] Start of Pipeline 00:00:04.957 [Pipeline] library 00:00:04.959 Loading library shm_lib@master 00:00:04.959 Library shm_lib@master is cached. Copying from home. 00:00:04.978 [Pipeline] node 00:00:04.985 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.988 [Pipeline] { 00:00:04.998 [Pipeline] catchError 00:00:05.000 [Pipeline] { 00:00:05.012 [Pipeline] wrap 00:00:05.021 [Pipeline] { 00:00:05.029 [Pipeline] stage 00:00:05.031 [Pipeline] { (Prologue) 00:00:05.232 [Pipeline] sh 00:00:05.518 + logger -p user.info -t JENKINS-CI 00:00:05.538 [Pipeline] echo 00:00:05.540 Node: WFP6 00:00:05.545 [Pipeline] sh 00:00:05.842 [Pipeline] setCustomBuildProperty 00:00:05.853 [Pipeline] echo 00:00:05.855 Cleanup processes 00:00:05.860 [Pipeline] sh 00:00:06.142 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.142 2217252 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.153 [Pipeline] sh 00:00:06.432 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.432 ++ grep -v 'sudo pgrep' 00:00:06.432 ++ awk '{print $1}' 00:00:06.432 + sudo kill -9 00:00:06.432 + true 00:00:06.445 [Pipeline] cleanWs 00:00:06.454 [WS-CLEANUP] Deleting project workspace... 00:00:06.454 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.460 [WS-CLEANUP] done 00:00:06.464 [Pipeline] setCustomBuildProperty 00:00:06.477 [Pipeline] sh 00:00:06.753 + sudo git config --global --replace-all safe.directory '*' 00:00:06.848 [Pipeline] httpRequest 00:00:07.484 [Pipeline] echo 00:00:07.486 Sorcerer 10.211.164.20 is alive 00:00:07.495 [Pipeline] retry 00:00:07.497 [Pipeline] { 00:00:07.511 [Pipeline] httpRequest 00:00:07.516 HttpMethod: GET 00:00:07.516 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.517 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.532 Response Code: HTTP/1.1 200 OK 00:00:07.532 Success: Status code 200 is in the accepted range: 200,404 00:00:07.532 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.166 [Pipeline] } 00:00:10.185 [Pipeline] // retry 00:00:10.192 [Pipeline] sh 00:00:10.474 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.489 [Pipeline] httpRequest 00:00:11.098 [Pipeline] echo 00:00:11.100 Sorcerer 10.211.164.20 is alive 00:00:11.111 [Pipeline] retry 00:00:11.114 [Pipeline] { 00:00:11.133 [Pipeline] httpRequest 00:00:11.138 HttpMethod: GET 00:00:11.138 URL: http://10.211.164.20/packages/spdk_0b4b4be7eaf3b6bf1376570a91067cb2c2dfab86.tar.gz 00:00:11.139 Sending request to url: http://10.211.164.20/packages/spdk_0b4b4be7eaf3b6bf1376570a91067cb2c2dfab86.tar.gz 00:00:11.154 Response Code: HTTP/1.1 200 OK 00:00:11.155 Success: Status code 200 is in the accepted range: 200,404 00:00:11.155 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0b4b4be7eaf3b6bf1376570a91067cb2c2dfab86.tar.gz 00:01:11.371 [Pipeline] } 00:01:11.383 [Pipeline] // retry 00:01:11.390 [Pipeline] sh 00:01:11.673 + tar --no-same-owner -xf spdk_0b4b4be7eaf3b6bf1376570a91067cb2c2dfab86.tar.gz 00:01:14.220 [Pipeline] sh 00:01:14.505 + git -C spdk log 
--oneline -n5 00:01:14.505 0b4b4be7e bdev: Add spdk_bdev_io_hide_metadata() for bdev modules 00:01:14.505 5200caf0b bdev/malloc: Extract internal of verify_pi() for code reuse 00:01:14.505 82b85d9ca bdev/malloc: malloc_done() uses switch-case for clean up 00:01:14.505 0728de5b0 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:01:14.505 349af566b nvmf: Get metadata config by not bdev but bdev_desc 00:01:14.516 [Pipeline] } 00:01:14.530 [Pipeline] // stage 00:01:14.540 [Pipeline] stage 00:01:14.542 [Pipeline] { (Prepare) 00:01:14.559 [Pipeline] writeFile 00:01:14.575 [Pipeline] sh 00:01:14.859 + logger -p user.info -t JENKINS-CI 00:01:14.872 [Pipeline] sh 00:01:15.158 + logger -p user.info -t JENKINS-CI 00:01:15.170 [Pipeline] sh 00:01:15.453 + cat autorun-spdk.conf 00:01:15.453 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.453 SPDK_TEST_NVMF=1 00:01:15.453 SPDK_TEST_NVME_CLI=1 00:01:15.453 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.453 SPDK_TEST_NVMF_NICS=e810 00:01:15.453 SPDK_TEST_VFIOUSER=1 00:01:15.453 SPDK_RUN_UBSAN=1 00:01:15.453 NET_TYPE=phy 00:01:15.460 RUN_NIGHTLY=0 00:01:15.465 [Pipeline] readFile 00:01:15.498 [Pipeline] withEnv 00:01:15.501 [Pipeline] { 00:01:15.516 [Pipeline] sh 00:01:15.799 + set -ex 00:01:15.799 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:15.799 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.799 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.799 ++ SPDK_TEST_NVMF=1 00:01:15.799 ++ SPDK_TEST_NVME_CLI=1 00:01:15.799 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.799 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.799 ++ SPDK_TEST_VFIOUSER=1 00:01:15.799 ++ SPDK_RUN_UBSAN=1 00:01:15.799 ++ NET_TYPE=phy 00:01:15.799 ++ RUN_NIGHTLY=0 00:01:15.799 + case $SPDK_TEST_NVMF_NICS in 00:01:15.799 + DRIVERS=ice 00:01:15.799 + [[ tcp == \r\d\m\a ]] 00:01:15.799 + [[ -n ice ]] 00:01:15.799 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:15.799 rmmod: ERROR: Module mlx4_ib is not currently loaded 
00:01:15.799 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:15.799 rmmod: ERROR: Module irdma is not currently loaded 00:01:15.799 rmmod: ERROR: Module i40iw is not currently loaded 00:01:15.799 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:15.799 + true 00:01:15.799 + for D in $DRIVERS 00:01:15.799 + sudo modprobe ice 00:01:15.799 + exit 0 00:01:15.808 [Pipeline] } 00:01:15.822 [Pipeline] // withEnv 00:01:15.827 [Pipeline] } 00:01:15.842 [Pipeline] // stage 00:01:15.851 [Pipeline] catchError 00:01:15.853 [Pipeline] { 00:01:15.866 [Pipeline] timeout 00:01:15.867 Timeout set to expire in 1 hr 0 min 00:01:15.869 [Pipeline] { 00:01:15.884 [Pipeline] stage 00:01:15.886 [Pipeline] { (Tests) 00:01:15.901 [Pipeline] sh 00:01:16.184 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.184 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.184 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.184 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:16.184 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.184 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:16.184 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:16.184 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:16.184 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:16.184 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:16.184 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:16.184 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:16.184 + source /etc/os-release 00:01:16.184 ++ NAME='Fedora Linux' 00:01:16.184 ++ VERSION='39 (Cloud Edition)' 00:01:16.184 ++ ID=fedora 00:01:16.184 ++ VERSION_ID=39 00:01:16.184 ++ VERSION_CODENAME= 00:01:16.184 ++ PLATFORM_ID=platform:f39 00:01:16.184 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:16.184 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:16.184 ++ LOGO=fedora-logo-icon 00:01:16.184 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:16.184 ++ HOME_URL=https://fedoraproject.org/ 00:01:16.184 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:16.184 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:16.184 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:16.184 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:16.184 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:16.184 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:16.184 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:16.184 ++ SUPPORT_END=2024-11-12 00:01:16.184 ++ VARIANT='Cloud Edition' 00:01:16.184 ++ VARIANT_ID=cloud 00:01:16.184 + uname -a 00:01:16.184 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:16.184 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:18.718 Hugepages 00:01:18.718 node hugesize free / total 00:01:18.718 node0 1048576kB 0 / 0 00:01:18.718 node0 2048kB 0 / 0 00:01:18.718 node1 1048576kB 0 / 0 00:01:18.718 node1 2048kB 0 / 0 00:01:18.718 00:01:18.718 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.718 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:18.718 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:18.718 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:18.718 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:18.718 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:18.718 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:18.718 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:18.718 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:18.718 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:18.718 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:18.718 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:18.718 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:18.718 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:18.718 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:18.718 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:18.718 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:18.718 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:18.718 + rm -f /tmp/spdk-ld-path 00:01:18.718 + source autorun-spdk.conf 00:01:18.718 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.718 ++ SPDK_TEST_NVMF=1 00:01:18.718 ++ SPDK_TEST_NVME_CLI=1 00:01:18.718 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.718 ++ SPDK_TEST_NVMF_NICS=e810 00:01:18.718 ++ SPDK_TEST_VFIOUSER=1 00:01:18.718 ++ SPDK_RUN_UBSAN=1 00:01:18.718 ++ NET_TYPE=phy 00:01:18.718 ++ RUN_NIGHTLY=0 00:01:18.718 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.718 + [[ -n '' ]] 00:01:18.718 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.719 + for M in /var/spdk/build-*-manifest.txt 00:01:18.719 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:18.719 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.719 + for M in /var/spdk/build-*-manifest.txt 00:01:18.719 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.719 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.719 + for M in /var/spdk/build-*-manifest.txt 00:01:18.719 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:18.719 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.719 ++ uname 00:01:18.719 + [[ Linux == \L\i\n\u\x ]] 00:01:18.719 + sudo dmesg -T 00:01:18.978 + sudo dmesg --clear 00:01:18.978 + dmesg_pid=2218176 00:01:18.978 + [[ Fedora Linux == FreeBSD ]] 00:01:18.978 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.978 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.978 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.978 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.978 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.978 + sudo dmesg -Tw 00:01:18.978 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.978 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.978 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.978 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.978 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.978 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.978 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.978 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.978 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.978 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.978 16:55:36 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:18.978 16:55:36 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:18.978 16:55:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:18.978 16:55:36 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:18.978 16:55:36 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.978 16:55:36 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:18.978 16:55:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:18.978 16:55:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:18.978 16:55:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.978 16:55:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.978 16:55:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.978 16:55:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.978 16:55:36 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.978 16:55:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.979 16:55:36 -- paths/export.sh@5 -- $ export PATH 00:01:18.979 16:55:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.979 16:55:36 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:18.979 16:55:36 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:18.979 16:55:36 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732118136.XXXXXX 00:01:18.979 16:55:36 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732118136.nVtBvv 00:01:18.979 16:55:36 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:18.979 16:55:36 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:18.979 16:55:36 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:18.979 16:55:36 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:18.979 16:55:36 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.979 16:55:36 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:18.979 16:55:36 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:18.979 16:55:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.979 16:55:36 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:18.979 16:55:37 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:18.979 16:55:37 -- pm/common@17 -- $ local monitor 00:01:18.979 16:55:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.979 16:55:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.979 16:55:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.979 16:55:37 -- pm/common@21 -- $ date +%s 00:01:18.979 16:55:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.979 16:55:37 -- pm/common@21 -- $ date +%s 00:01:18.979 16:55:37 -- pm/common@25 -- $ sleep 1 00:01:18.979 16:55:37 -- pm/common@21 -- $ date +%s 00:01:18.979 16:55:37 -- pm/common@21 -- $ date +%s 00:01:18.979 16:55:37 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732118137 00:01:18.979 16:55:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732118137 00:01:18.979 16:55:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732118137 00:01:18.979 16:55:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732118137 00:01:19.238 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732118137_collect-cpu-load.pm.log 00:01:19.238 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732118137_collect-vmstat.pm.log 00:01:19.238 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732118137_collect-cpu-temp.pm.log 00:01:19.238 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732118137_collect-bmc-pm.bmc.pm.log 00:01:20.172 16:55:38 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:20.172 16:55:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.172 16:55:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.172 16:55:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.172 16:55:38 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.172 Wed Nov 20 03:55:38 PM UTC 2024 00:01:20.172 16:55:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:20.172 v25.01-pre-244-g0b4b4be7e 00:01:20.172 16:55:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:20.172 16:55:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.172 16:55:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.172 16:55:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:20.172 16:55:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:20.172 16:55:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.172 ************************************ 00:01:20.172 START TEST ubsan 00:01:20.172 ************************************ 00:01:20.172 16:55:38 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:20.172 using ubsan 00:01:20.172 00:01:20.172 real 0m0.000s 00:01:20.172 user 0m0.000s 00:01:20.172 sys 0m0.000s 00:01:20.172 16:55:38 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:20.172 16:55:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.172 ************************************ 00:01:20.172 END TEST ubsan 00:01:20.172 ************************************ 00:01:20.172 16:55:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.172 16:55:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.172 16:55:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.172 16:55:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.172 16:55:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.172 16:55:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.172 16:55:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.172 16:55:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.172 16:55:38 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:20.430 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:20.430 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:20.688 Using 'verbs' RDMA provider 00:01:33.883 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:46.092 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:46.092 Creating mk/config.mk...done. 00:01:46.092 Creating mk/cc.flags.mk...done. 00:01:46.092 Type 'make' to build. 00:01:46.092 16:56:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:46.092 16:56:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:46.092 16:56:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:46.092 16:56:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.092 ************************************ 00:01:46.092 START TEST make 00:01:46.092 ************************************ 00:01:46.092 16:56:03 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:46.092 make[1]: Nothing to be done for 'all'. 
00:01:47.031 The Meson build system 00:01:47.031 Version: 1.5.0 00:01:47.031 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:47.031 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.031 Build type: native build 00:01:47.031 Project name: libvfio-user 00:01:47.031 Project version: 0.0.1 00:01:47.031 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:47.031 C linker for the host machine: cc ld.bfd 2.40-14 00:01:47.031 Host machine cpu family: x86_64 00:01:47.031 Host machine cpu: x86_64 00:01:47.031 Run-time dependency threads found: YES 00:01:47.031 Library dl found: YES 00:01:47.031 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:47.031 Run-time dependency json-c found: YES 0.17 00:01:47.031 Run-time dependency cmocka found: YES 1.1.7 00:01:47.031 Program pytest-3 found: NO 00:01:47.031 Program flake8 found: NO 00:01:47.031 Program misspell-fixer found: NO 00:01:47.031 Program restructuredtext-lint found: NO 00:01:47.031 Program valgrind found: YES (/usr/bin/valgrind) 00:01:47.031 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:47.031 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.031 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.031 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:47.031 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:47.031 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:47.031 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:47.031 Build targets in project: 8 00:01:47.031 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:47.031 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:47.032 00:01:47.032 libvfio-user 0.0.1 00:01:47.032 00:01:47.032 User defined options 00:01:47.032 buildtype : debug 00:01:47.032 default_library: shared 00:01:47.032 libdir : /usr/local/lib 00:01:47.032 00:01:47.032 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:47.965 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:47.965 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:47.965 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:47.965 [3/37] Compiling C object samples/null.p/null.c.o 00:01:47.965 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:47.965 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:47.965 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:47.965 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:47.965 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:47.965 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:47.965 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:47.965 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:47.965 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:47.965 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:47.965 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:47.965 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:47.965 [16/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:47.965 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:47.965 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran.c.o 00:01:47.965 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:47.965 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:47.965 [21/37] Compiling C object samples/server.p/server.c.o 00:01:47.965 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:47.965 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:47.965 [24/37] Compiling C object samples/client.p/client.c.o 00:01:47.965 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:47.965 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:47.965 [27/37] Linking target samples/client 00:01:47.965 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:47.965 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:47.965 [30/37] Linking target test/unit_tests 00:01:47.965 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:48.222 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:48.222 [33/37] Linking target samples/lspci 00:01:48.222 [34/37] Linking target samples/null 00:01:48.222 [35/37] Linking target samples/server 00:01:48.222 [36/37] Linking target samples/gpio-pci-idio-16 00:01:48.222 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:48.222 INFO: autodetecting backend as ninja 00:01:48.222 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:48.222 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:48.787 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:48.787 ninja: no work to do. 
00:01:54.068 The Meson build system 00:01:54.068 Version: 1.5.0 00:01:54.068 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:54.068 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:54.068 Build type: native build 00:01:54.068 Program cat found: YES (/usr/bin/cat) 00:01:54.068 Project name: DPDK 00:01:54.068 Project version: 24.03.0 00:01:54.068 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:54.068 C linker for the host machine: cc ld.bfd 2.40-14 00:01:54.068 Host machine cpu family: x86_64 00:01:54.068 Host machine cpu: x86_64 00:01:54.068 Message: ## Building in Developer Mode ## 00:01:54.068 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.068 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.068 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.068 Program python3 found: YES (/usr/bin/python3) 00:01:54.068 Program cat found: YES (/usr/bin/cat) 00:01:54.068 Compiler for C supports arguments -march=native: YES 00:01:54.068 Checking for size of "void *" : 8 00:01:54.068 Checking for size of "void *" : 8 (cached) 00:01:54.068 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:54.068 Library m found: YES 00:01:54.068 Library numa found: YES 00:01:54.068 Has header "numaif.h" : YES 00:01:54.068 Library fdt found: NO 00:01:54.068 Library execinfo found: NO 00:01:54.068 Has header "execinfo.h" : YES 00:01:54.068 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:54.068 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.068 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.068 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.068 Run-time dependency openssl found: YES 3.1.1 00:01:54.068 Run-time 
dependency libpcap found: YES 1.10.4 00:01:54.068 Has header "pcap.h" with dependency libpcap: YES 00:01:54.068 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.068 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.068 Compiler for C supports arguments -Wformat: YES 00:01:54.068 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.068 Compiler for C supports arguments -Wformat-security: NO 00:01:54.068 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.068 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.068 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.068 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.068 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.068 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.068 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.068 Compiler for C supports arguments -Wundef: YES 00:01:54.068 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.068 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.068 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.068 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.068 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.068 Program objdump found: YES (/usr/bin/objdump) 00:01:54.068 Compiler for C supports arguments -mavx512f: YES 00:01:54.068 Checking if "AVX512 checking" compiles: YES 00:01:54.068 Fetching value of define "__SSE4_2__" : 1 00:01:54.068 Fetching value of define "__AES__" : 1 00:01:54.068 Fetching value of define "__AVX__" : 1 00:01:54.068 Fetching value of define "__AVX2__" : 1 00:01:54.068 Fetching value of define "__AVX512BW__" : 1 00:01:54.068 Fetching value of define "__AVX512CD__" : 1 00:01:54.068 Fetching value of define "__AVX512DQ__" : 1 00:01:54.068 Fetching value of define "__AVX512F__" : 1 
00:01:54.068 Fetching value of define "__AVX512VL__" : 1 00:01:54.068 Fetching value of define "__PCLMUL__" : 1 00:01:54.068 Fetching value of define "__RDRND__" : 1 00:01:54.068 Fetching value of define "__RDSEED__" : 1 00:01:54.068 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.068 Fetching value of define "__znver1__" : (undefined) 00:01:54.068 Fetching value of define "__znver2__" : (undefined) 00:01:54.068 Fetching value of define "__znver3__" : (undefined) 00:01:54.068 Fetching value of define "__znver4__" : (undefined) 00:01:54.068 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.068 Message: lib/log: Defining dependency "log" 00:01:54.068 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.068 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.068 Checking for function "getentropy" : NO 00:01:54.068 Message: lib/eal: Defining dependency "eal" 00:01:54.068 Message: lib/ring: Defining dependency "ring" 00:01:54.068 Message: lib/rcu: Defining dependency "rcu" 00:01:54.068 Message: lib/mempool: Defining dependency "mempool" 00:01:54.068 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.068 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.068 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:54.068 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:54.068 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:54.068 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:54.068 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:54.068 Compiler for C supports arguments -mpclmul: YES 00:01:54.068 Compiler for C supports arguments -maes: YES 00:01:54.068 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.068 Compiler for C supports arguments -mavx512bw: YES 00:01:54.068 Compiler for C supports arguments -mavx512dq: YES 00:01:54.068 Compiler for C supports arguments -mavx512vl: YES 00:01:54.068 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:54.068 Compiler for C supports arguments -mavx2: YES 00:01:54.068 Compiler for C supports arguments -mavx: YES 00:01:54.068 Message: lib/net: Defining dependency "net" 00:01:54.068 Message: lib/meter: Defining dependency "meter" 00:01:54.068 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.068 Message: lib/pci: Defining dependency "pci" 00:01:54.068 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.068 Message: lib/hash: Defining dependency "hash" 00:01:54.068 Message: lib/timer: Defining dependency "timer" 00:01:54.068 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.068 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.068 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.068 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.068 Message: lib/power: Defining dependency "power" 00:01:54.068 Message: lib/reorder: Defining dependency "reorder" 00:01:54.068 Message: lib/security: Defining dependency "security" 00:01:54.068 Has header "linux/userfaultfd.h" : YES 00:01:54.068 Has header "linux/vduse.h" : YES 00:01:54.068 Message: lib/vhost: Defining dependency "vhost" 00:01:54.068 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.068 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.068 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.068 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.068 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.068 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.068 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.068 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.068 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.068 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:54.068 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:54.068 Configuring doxy-api-html.conf using configuration 00:01:54.068 Configuring doxy-api-man.conf using configuration 00:01:54.068 Program mandb found: YES (/usr/bin/mandb) 00:01:54.068 Program sphinx-build found: NO 00:01:54.068 Configuring rte_build_config.h using configuration 00:01:54.068 Message: 00:01:54.068 ================= 00:01:54.068 Applications Enabled 00:01:54.068 ================= 00:01:54.068 00:01:54.068 apps: 00:01:54.068 00:01:54.068 00:01:54.068 Message: 00:01:54.068 ================= 00:01:54.068 Libraries Enabled 00:01:54.068 ================= 00:01:54.068 00:01:54.068 libs: 00:01:54.068 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.068 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.068 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.068 00:01:54.068 Message: 00:01:54.068 =============== 00:01:54.068 Drivers Enabled 00:01:54.068 =============== 00:01:54.068 00:01:54.068 common: 00:01:54.068 00:01:54.068 bus: 00:01:54.068 pci, vdev, 00:01:54.068 mempool: 00:01:54.068 ring, 00:01:54.068 dma: 00:01:54.069 00:01:54.069 net: 00:01:54.069 00:01:54.069 crypto: 00:01:54.069 00:01:54.069 compress: 00:01:54.069 00:01:54.069 vdpa: 00:01:54.069 00:01:54.069 00:01:54.069 Message: 00:01:54.069 ================= 00:01:54.069 Content Skipped 00:01:54.069 ================= 00:01:54.069 00:01:54.069 apps: 00:01:54.069 dumpcap: explicitly disabled via build config 00:01:54.069 graph: explicitly disabled via build config 00:01:54.069 pdump: explicitly disabled via build config 00:01:54.069 proc-info: explicitly disabled via build config 00:01:54.069 test-acl: explicitly disabled via build config 00:01:54.069 test-bbdev: explicitly disabled via build config 00:01:54.069 test-cmdline: explicitly disabled via build config 00:01:54.069 test-compress-perf: explicitly disabled via build config 00:01:54.069 test-crypto-perf: explicitly disabled 
via build config 00:01:54.069 test-dma-perf: explicitly disabled via build config 00:01:54.069 test-eventdev: explicitly disabled via build config 00:01:54.069 test-fib: explicitly disabled via build config 00:01:54.069 test-flow-perf: explicitly disabled via build config 00:01:54.069 test-gpudev: explicitly disabled via build config 00:01:54.069 test-mldev: explicitly disabled via build config 00:01:54.069 test-pipeline: explicitly disabled via build config 00:01:54.069 test-pmd: explicitly disabled via build config 00:01:54.069 test-regex: explicitly disabled via build config 00:01:54.069 test-sad: explicitly disabled via build config 00:01:54.069 test-security-perf: explicitly disabled via build config 00:01:54.069 00:01:54.069 libs: 00:01:54.069 argparse: explicitly disabled via build config 00:01:54.069 metrics: explicitly disabled via build config 00:01:54.069 acl: explicitly disabled via build config 00:01:54.069 bbdev: explicitly disabled via build config 00:01:54.069 bitratestats: explicitly disabled via build config 00:01:54.069 bpf: explicitly disabled via build config 00:01:54.069 cfgfile: explicitly disabled via build config 00:01:54.069 distributor: explicitly disabled via build config 00:01:54.069 efd: explicitly disabled via build config 00:01:54.069 eventdev: explicitly disabled via build config 00:01:54.069 dispatcher: explicitly disabled via build config 00:01:54.069 gpudev: explicitly disabled via build config 00:01:54.069 gro: explicitly disabled via build config 00:01:54.069 gso: explicitly disabled via build config 00:01:54.069 ip_frag: explicitly disabled via build config 00:01:54.069 jobstats: explicitly disabled via build config 00:01:54.069 latencystats: explicitly disabled via build config 00:01:54.069 lpm: explicitly disabled via build config 00:01:54.069 member: explicitly disabled via build config 00:01:54.069 pcapng: explicitly disabled via build config 00:01:54.069 rawdev: explicitly disabled via build config 00:01:54.069 regexdev: 
explicitly disabled via build config 00:01:54.069 mldev: explicitly disabled via build config 00:01:54.069 rib: explicitly disabled via build config 00:01:54.069 sched: explicitly disabled via build config 00:01:54.069 stack: explicitly disabled via build config 00:01:54.069 ipsec: explicitly disabled via build config 00:01:54.069 pdcp: explicitly disabled via build config 00:01:54.069 fib: explicitly disabled via build config 00:01:54.069 port: explicitly disabled via build config 00:01:54.069 pdump: explicitly disabled via build config 00:01:54.069 table: explicitly disabled via build config 00:01:54.069 pipeline: explicitly disabled via build config 00:01:54.069 graph: explicitly disabled via build config 00:01:54.069 node: explicitly disabled via build config 00:01:54.069 00:01:54.069 drivers: 00:01:54.069 common/cpt: not in enabled drivers build config 00:01:54.069 common/dpaax: not in enabled drivers build config 00:01:54.069 common/iavf: not in enabled drivers build config 00:01:54.069 common/idpf: not in enabled drivers build config 00:01:54.069 common/ionic: not in enabled drivers build config 00:01:54.069 common/mvep: not in enabled drivers build config 00:01:54.069 common/octeontx: not in enabled drivers build config 00:01:54.069 bus/auxiliary: not in enabled drivers build config 00:01:54.069 bus/cdx: not in enabled drivers build config 00:01:54.069 bus/dpaa: not in enabled drivers build config 00:01:54.069 bus/fslmc: not in enabled drivers build config 00:01:54.069 bus/ifpga: not in enabled drivers build config 00:01:54.069 bus/platform: not in enabled drivers build config 00:01:54.069 bus/uacce: not in enabled drivers build config 00:01:54.069 bus/vmbus: not in enabled drivers build config 00:01:54.069 common/cnxk: not in enabled drivers build config 00:01:54.069 common/mlx5: not in enabled drivers build config 00:01:54.069 common/nfp: not in enabled drivers build config 00:01:54.069 common/nitrox: not in enabled drivers build config 00:01:54.069 
common/qat: not in enabled drivers build config 00:01:54.069 common/sfc_efx: not in enabled drivers build config 00:01:54.069 mempool/bucket: not in enabled drivers build config 00:01:54.069 mempool/cnxk: not in enabled drivers build config 00:01:54.069 mempool/dpaa: not in enabled drivers build config 00:01:54.069 mempool/dpaa2: not in enabled drivers build config 00:01:54.069 mempool/octeontx: not in enabled drivers build config 00:01:54.069 mempool/stack: not in enabled drivers build config 00:01:54.069 dma/cnxk: not in enabled drivers build config 00:01:54.069 dma/dpaa: not in enabled drivers build config 00:01:54.069 dma/dpaa2: not in enabled drivers build config 00:01:54.069 dma/hisilicon: not in enabled drivers build config 00:01:54.069 dma/idxd: not in enabled drivers build config 00:01:54.069 dma/ioat: not in enabled drivers build config 00:01:54.069 dma/skeleton: not in enabled drivers build config 00:01:54.069 net/af_packet: not in enabled drivers build config 00:01:54.069 net/af_xdp: not in enabled drivers build config 00:01:54.069 net/ark: not in enabled drivers build config 00:01:54.069 net/atlantic: not in enabled drivers build config 00:01:54.069 net/avp: not in enabled drivers build config 00:01:54.069 net/axgbe: not in enabled drivers build config 00:01:54.069 net/bnx2x: not in enabled drivers build config 00:01:54.069 net/bnxt: not in enabled drivers build config 00:01:54.069 net/bonding: not in enabled drivers build config 00:01:54.069 net/cnxk: not in enabled drivers build config 00:01:54.069 net/cpfl: not in enabled drivers build config 00:01:54.069 net/cxgbe: not in enabled drivers build config 00:01:54.069 net/dpaa: not in enabled drivers build config 00:01:54.069 net/dpaa2: not in enabled drivers build config 00:01:54.069 net/e1000: not in enabled drivers build config 00:01:54.069 net/ena: not in enabled drivers build config 00:01:54.069 net/enetc: not in enabled drivers build config 00:01:54.069 net/enetfec: not in enabled drivers build 
config 00:01:54.069 net/enic: not in enabled drivers build config 00:01:54.069 net/failsafe: not in enabled drivers build config 00:01:54.069 net/fm10k: not in enabled drivers build config 00:01:54.069 net/gve: not in enabled drivers build config 00:01:54.069 net/hinic: not in enabled drivers build config 00:01:54.069 net/hns3: not in enabled drivers build config 00:01:54.069 net/i40e: not in enabled drivers build config 00:01:54.069 net/iavf: not in enabled drivers build config 00:01:54.069 net/ice: not in enabled drivers build config 00:01:54.069 net/idpf: not in enabled drivers build config 00:01:54.069 net/igc: not in enabled drivers build config 00:01:54.069 net/ionic: not in enabled drivers build config 00:01:54.069 net/ipn3ke: not in enabled drivers build config 00:01:54.069 net/ixgbe: not in enabled drivers build config 00:01:54.069 net/mana: not in enabled drivers build config 00:01:54.069 net/memif: not in enabled drivers build config 00:01:54.069 net/mlx4: not in enabled drivers build config 00:01:54.069 net/mlx5: not in enabled drivers build config 00:01:54.069 net/mvneta: not in enabled drivers build config 00:01:54.069 net/mvpp2: not in enabled drivers build config 00:01:54.069 net/netvsc: not in enabled drivers build config 00:01:54.069 net/nfb: not in enabled drivers build config 00:01:54.069 net/nfp: not in enabled drivers build config 00:01:54.069 net/ngbe: not in enabled drivers build config 00:01:54.069 net/null: not in enabled drivers build config 00:01:54.069 net/octeontx: not in enabled drivers build config 00:01:54.069 net/octeon_ep: not in enabled drivers build config 00:01:54.069 net/pcap: not in enabled drivers build config 00:01:54.069 net/pfe: not in enabled drivers build config 00:01:54.069 net/qede: not in enabled drivers build config 00:01:54.069 net/ring: not in enabled drivers build config 00:01:54.069 net/sfc: not in enabled drivers build config 00:01:54.069 net/softnic: not in enabled drivers build config 00:01:54.069 net/tap: 
not in enabled drivers build config 00:01:54.069 net/thunderx: not in enabled drivers build config 00:01:54.069 net/txgbe: not in enabled drivers build config 00:01:54.069 net/vdev_netvsc: not in enabled drivers build config 00:01:54.069 net/vhost: not in enabled drivers build config 00:01:54.069 net/virtio: not in enabled drivers build config 00:01:54.069 net/vmxnet3: not in enabled drivers build config 00:01:54.069 raw/*: missing internal dependency, "rawdev" 00:01:54.069 crypto/armv8: not in enabled drivers build config 00:01:54.069 crypto/bcmfs: not in enabled drivers build config 00:01:54.069 crypto/caam_jr: not in enabled drivers build config 00:01:54.069 crypto/ccp: not in enabled drivers build config 00:01:54.069 crypto/cnxk: not in enabled drivers build config 00:01:54.069 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.069 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.069 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.069 crypto/mlx5: not in enabled drivers build config 00:01:54.069 crypto/mvsam: not in enabled drivers build config 00:01:54.069 crypto/nitrox: not in enabled drivers build config 00:01:54.069 crypto/null: not in enabled drivers build config 00:01:54.069 crypto/octeontx: not in enabled drivers build config 00:01:54.069 crypto/openssl: not in enabled drivers build config 00:01:54.069 crypto/scheduler: not in enabled drivers build config 00:01:54.069 crypto/uadk: not in enabled drivers build config 00:01:54.069 crypto/virtio: not in enabled drivers build config 00:01:54.069 compress/isal: not in enabled drivers build config 00:01:54.069 compress/mlx5: not in enabled drivers build config 00:01:54.069 compress/nitrox: not in enabled drivers build config 00:01:54.069 compress/octeontx: not in enabled drivers build config 00:01:54.069 compress/zlib: not in enabled drivers build config 00:01:54.069 regex/*: missing internal dependency, "regexdev" 00:01:54.069 ml/*: missing internal dependency, "mldev" 
00:01:54.069 vdpa/ifc: not in enabled drivers build config 00:01:54.070 vdpa/mlx5: not in enabled drivers build config 00:01:54.070 vdpa/nfp: not in enabled drivers build config 00:01:54.070 vdpa/sfc: not in enabled drivers build config 00:01:54.070 event/*: missing internal dependency, "eventdev" 00:01:54.070 baseband/*: missing internal dependency, "bbdev" 00:01:54.070 gpu/*: missing internal dependency, "gpudev" 00:01:54.070 00:01:54.070 00:01:54.070 Build targets in project: 85 00:01:54.070 00:01:54.070 DPDK 24.03.0 00:01:54.070 00:01:54.070 User defined options 00:01:54.070 buildtype : debug 00:01:54.070 default_library : shared 00:01:54.070 libdir : lib 00:01:54.070 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:54.070 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.070 c_link_args : 00:01:54.070 cpu_instruction_set: native 00:01:54.070 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:54.070 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:54.070 enable_docs : false 00:01:54.070 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:54.070 enable_kmods : false 00:01:54.070 max_lcores : 128 00:01:54.070 tests : false 00:01:54.070 00:01:54.070 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.070 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:54.338 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:01:54.338 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.338 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.338 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.338 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:54.338 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.338 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.338 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.338 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:54.338 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.338 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.338 [12/268] Linking static target lib/librte_kvargs.a 00:01:54.338 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.338 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:54.338 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.338 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.338 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.338 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.600 [19/268] Linking static target lib/librte_log.a 00:01:54.600 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:54.600 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:54.600 [22/268] Linking static target lib/librte_pci.a 00:01:54.600 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:54.600 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:54.863 [25/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:54.863 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:54.863 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:54.863 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:54.863 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.863 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:54.863 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:54.863 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:54.863 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:54.863 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:54.863 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:54.863 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:54.863 [37/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:54.863 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:54.863 [39/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.863 [40/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:54.863 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:54.863 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:54.863 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:54.863 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:54.863 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:54.863 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.863 
[47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:54.863 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:54.863 [49/268] Linking static target lib/librte_meter.a 00:01:54.863 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:54.863 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:54.863 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.863 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:54.863 [54/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:54.863 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:54.863 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:54.863 [57/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:54.863 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:54.863 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:54.863 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.863 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:54.863 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:54.863 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:54.863 [64/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:54.863 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:54.863 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.863 [67/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:54.863 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:54.863 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:54.863 
[70/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:54.863 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:54.863 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:54.863 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:54.863 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:54.863 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:54.863 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:54.863 [77/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.863 [78/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:54.863 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:54.863 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:54.863 [81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:54.863 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.863 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:54.863 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.127 [85/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.127 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.127 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.127 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.127 [89/268] Linking static target lib/librte_ring.a 00:01:55.127 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.127 [91/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.127 [92/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.127 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.127 [94/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.127 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.127 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.127 [97/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.127 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.127 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.127 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.127 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.127 [102/268] Linking static target lib/librte_telemetry.a 00:01:55.127 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.127 [104/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.127 [105/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.127 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.127 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.127 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.127 [109/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.127 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.127 [111/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.127 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.127 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.127 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 
00:01:55.127 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.127 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.127 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.127 [118/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.127 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.127 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.127 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.127 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.127 [123/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.127 [124/268] Linking static target lib/librte_rcu.a 00:01:55.127 [125/268] Linking static target lib/librte_net.a 00:01:55.127 [126/268] Linking static target lib/librte_mempool.a 00:01:55.127 [127/268] Linking static target lib/librte_eal.a 00:01:55.127 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.127 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.127 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.127 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.127 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.127 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.127 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.386 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.386 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.386 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:55.386 [138/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.386 [139/268] Linking target lib/librte_log.so.24.1 00:01:55.386 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.386 [141/268] Linking static target lib/librte_mbuf.a 00:01:55.386 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.386 [143/268] Linking static target lib/librte_cmdline.a 00:01:55.386 [144/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.386 [145/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.386 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.386 [147/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.386 [148/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.386 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.386 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.386 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.386 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.386 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.386 [154/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.386 [155/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.386 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.386 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.386 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.386 [159/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.386 [160/268] 
Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:55.386 [161/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.386 [162/268] Linking static target lib/librte_timer.a 00:01:55.386 [163/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.386 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.386 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.386 [166/268] Linking target lib/librte_kvargs.so.24.1 00:01:55.386 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.386 [168/268] Linking static target lib/librte_dmadev.a 00:01:55.386 [169/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.386 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.386 [171/268] Linking target lib/librte_telemetry.so.24.1 00:01:55.386 [172/268] Linking static target lib/librte_security.a 00:01:55.386 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.645 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.645 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:55.645 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.645 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.645 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.645 [179/268] Linking static target lib/librte_power.a 00:01:55.645 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.645 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.645 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.645 [183/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.645 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.645 [185/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:55.645 [186/268] Linking static target lib/librte_compressdev.a 00:01:55.645 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:55.645 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.645 [189/268] Linking static target lib/librte_hash.a 00:01:55.645 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.645 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:55.645 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.645 [193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:55.645 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:55.645 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.646 [196/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:55.646 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.646 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.646 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:55.646 [200/268] Linking static target lib/librte_reorder.a 00:01:55.646 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:55.646 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:55.904 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:55.904 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.904 [205/268] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:55.904 [206/268] Linking static target drivers/librte_bus_pci.a 00:01:55.904 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.904 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.904 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.904 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:55.904 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:55.904 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.904 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.904 [214/268] Linking static target lib/librte_cryptodev.a 00:01:55.904 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.163 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.163 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.163 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.163 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.163 [220/268] Linking static target lib/librte_ethdev.a 00:01:56.163 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.422 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.422 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.422 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.422 [225/268] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.422 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.680 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.615 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.615 [229/268] Linking static target lib/librte_vhost.a 00:01:57.873 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.310 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.583 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.151 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.151 [234/268] Linking target lib/librte_eal.so.24.1 00:02:05.409 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:05.409 [236/268] Linking target lib/librte_ring.so.24.1 00:02:05.409 [237/268] Linking target lib/librte_timer.so.24.1 00:02:05.409 [238/268] Linking target lib/librte_meter.so.24.1 00:02:05.409 [239/268] Linking target lib/librte_pci.so.24.1 00:02:05.409 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:05.409 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:05.409 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:05.409 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:05.409 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:05.409 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:05.409 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:05.667 [247/268] Linking target 
drivers/librte_bus_pci.so.24.1 00:02:05.667 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:05.667 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:05.667 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.667 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.667 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:05.667 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.926 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:05.926 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:05.926 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:05.926 [257/268] Linking target lib/librte_net.so.24.1 00:02:05.926 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:05.926 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.926 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:06.185 [261/268] Linking target lib/librte_hash.so.24.1 00:02:06.185 [262/268] Linking target lib/librte_security.so.24.1 00:02:06.185 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:06.185 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:06.185 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.185 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:06.185 [267/268] Linking target lib/librte_power.so.24.1 00:02:06.185 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:06.444 INFO: autodetecting backend as ninja 00:02:06.444 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:16.420 CC lib/log/log.o 00:02:16.420 CC lib/log/log_flags.o 00:02:16.420 CC lib/log/log_deprecated.o 00:02:16.420 CC lib/ut/ut.o 
00:02:16.679 CC lib/ut_mock/mock.o 00:02:16.679 LIB libspdk_ut.a 00:02:16.679 LIB libspdk_ut_mock.a 00:02:16.679 LIB libspdk_log.a 00:02:16.679 SO libspdk_ut.so.2.0 00:02:16.679 SO libspdk_ut_mock.so.6.0 00:02:16.679 SO libspdk_log.so.7.1 00:02:16.679 SYMLINK libspdk_ut.so 00:02:16.679 SYMLINK libspdk_ut_mock.so 00:02:16.938 SYMLINK libspdk_log.so 00:02:17.196 CC lib/dma/dma.o 00:02:17.196 CXX lib/trace_parser/trace.o 00:02:17.196 CC lib/util/base64.o 00:02:17.196 CC lib/util/cpuset.o 00:02:17.196 CC lib/util/bit_array.o 00:02:17.196 CC lib/ioat/ioat.o 00:02:17.196 CC lib/util/crc16.o 00:02:17.196 CC lib/util/crc32.o 00:02:17.196 CC lib/util/crc32c.o 00:02:17.196 CC lib/util/crc32_ieee.o 00:02:17.196 CC lib/util/crc64.o 00:02:17.196 CC lib/util/dif.o 00:02:17.196 CC lib/util/fd.o 00:02:17.196 CC lib/util/fd_group.o 00:02:17.196 CC lib/util/file.o 00:02:17.196 CC lib/util/hexlify.o 00:02:17.196 CC lib/util/iov.o 00:02:17.196 CC lib/util/math.o 00:02:17.196 CC lib/util/pipe.o 00:02:17.196 CC lib/util/net.o 00:02:17.196 CC lib/util/strerror_tls.o 00:02:17.196 CC lib/util/string.o 00:02:17.196 CC lib/util/uuid.o 00:02:17.196 CC lib/util/xor.o 00:02:17.196 CC lib/util/zipf.o 00:02:17.196 CC lib/util/md5.o 00:02:17.196 CC lib/vfio_user/host/vfio_user.o 00:02:17.196 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.454 LIB libspdk_dma.a 00:02:17.454 SO libspdk_dma.so.5.0 00:02:17.454 LIB libspdk_ioat.a 00:02:17.454 SYMLINK libspdk_dma.so 00:02:17.454 SO libspdk_ioat.so.7.0 00:02:17.454 SYMLINK libspdk_ioat.so 00:02:17.454 LIB libspdk_vfio_user.a 00:02:17.454 SO libspdk_vfio_user.so.5.0 00:02:17.712 SYMLINK libspdk_vfio_user.so 00:02:17.712 LIB libspdk_util.a 00:02:17.712 SO libspdk_util.so.10.1 00:02:17.712 SYMLINK libspdk_util.so 00:02:17.971 LIB libspdk_trace_parser.a 00:02:17.971 SO libspdk_trace_parser.so.6.0 00:02:17.971 SYMLINK libspdk_trace_parser.so 00:02:18.229 CC lib/conf/conf.o 00:02:18.229 CC lib/idxd/idxd.o 00:02:18.229 CC lib/rdma_utils/rdma_utils.o 
00:02:18.229 CC lib/idxd/idxd_user.o 00:02:18.229 CC lib/idxd/idxd_kernel.o 00:02:18.229 CC lib/json/json_parse.o 00:02:18.229 CC lib/json/json_util.o 00:02:18.229 CC lib/json/json_write.o 00:02:18.229 CC lib/env_dpdk/env.o 00:02:18.229 CC lib/vmd/vmd.o 00:02:18.229 CC lib/env_dpdk/memory.o 00:02:18.229 CC lib/vmd/led.o 00:02:18.229 CC lib/env_dpdk/pci.o 00:02:18.229 CC lib/env_dpdk/init.o 00:02:18.229 CC lib/env_dpdk/threads.o 00:02:18.229 CC lib/env_dpdk/pci_ioat.o 00:02:18.229 CC lib/env_dpdk/pci_virtio.o 00:02:18.229 CC lib/env_dpdk/pci_vmd.o 00:02:18.229 CC lib/env_dpdk/pci_idxd.o 00:02:18.229 CC lib/env_dpdk/pci_event.o 00:02:18.229 CC lib/env_dpdk/pci_dpdk.o 00:02:18.229 CC lib/env_dpdk/sigbus_handler.o 00:02:18.229 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.229 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.488 LIB libspdk_conf.a 00:02:18.488 SO libspdk_conf.so.6.0 00:02:18.488 LIB libspdk_rdma_utils.a 00:02:18.488 SYMLINK libspdk_conf.so 00:02:18.488 SO libspdk_rdma_utils.so.1.0 00:02:18.488 LIB libspdk_json.a 00:02:18.488 SO libspdk_json.so.6.0 00:02:18.488 SYMLINK libspdk_rdma_utils.so 00:02:18.488 SYMLINK libspdk_json.so 00:02:18.746 LIB libspdk_idxd.a 00:02:18.746 LIB libspdk_vmd.a 00:02:18.746 SO libspdk_idxd.so.12.1 00:02:18.746 SO libspdk_vmd.so.6.0 00:02:18.746 SYMLINK libspdk_idxd.so 00:02:18.746 SYMLINK libspdk_vmd.so 00:02:18.746 CC lib/rdma_provider/common.o 00:02:18.746 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:18.746 CC lib/jsonrpc/jsonrpc_server.o 00:02:18.746 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:18.746 CC lib/jsonrpc/jsonrpc_client.o 00:02:18.746 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.004 LIB libspdk_rdma_provider.a 00:02:19.004 SO libspdk_rdma_provider.so.7.0 00:02:19.004 LIB libspdk_jsonrpc.a 00:02:19.004 SYMLINK libspdk_rdma_provider.so 00:02:19.004 SO libspdk_jsonrpc.so.6.0 00:02:19.263 SYMLINK libspdk_jsonrpc.so 00:02:19.263 LIB libspdk_env_dpdk.a 00:02:19.263 SO libspdk_env_dpdk.so.15.1 00:02:19.263 SYMLINK 
libspdk_env_dpdk.so 00:02:19.522 CC lib/rpc/rpc.o 00:02:19.522 LIB libspdk_rpc.a 00:02:19.781 SO libspdk_rpc.so.6.0 00:02:19.781 SYMLINK libspdk_rpc.so 00:02:20.041 CC lib/notify/notify.o 00:02:20.041 CC lib/notify/notify_rpc.o 00:02:20.041 CC lib/keyring/keyring.o 00:02:20.041 CC lib/keyring/keyring_rpc.o 00:02:20.041 CC lib/trace/trace.o 00:02:20.041 CC lib/trace/trace_flags.o 00:02:20.041 CC lib/trace/trace_rpc.o 00:02:20.299 LIB libspdk_notify.a 00:02:20.299 SO libspdk_notify.so.6.0 00:02:20.299 LIB libspdk_keyring.a 00:02:20.299 LIB libspdk_trace.a 00:02:20.299 SO libspdk_keyring.so.2.0 00:02:20.299 SYMLINK libspdk_notify.so 00:02:20.299 SO libspdk_trace.so.11.0 00:02:20.299 SYMLINK libspdk_keyring.so 00:02:20.299 SYMLINK libspdk_trace.so 00:02:20.558 CC lib/sock/sock.o 00:02:20.558 CC lib/thread/thread.o 00:02:20.558 CC lib/sock/sock_rpc.o 00:02:20.558 CC lib/thread/iobuf.o 00:02:21.126 LIB libspdk_sock.a 00:02:21.126 SO libspdk_sock.so.10.0 00:02:21.126 SYMLINK libspdk_sock.so 00:02:21.383 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:21.383 CC lib/nvme/nvme_ctrlr.o 00:02:21.383 CC lib/nvme/nvme_fabric.o 00:02:21.383 CC lib/nvme/nvme_ns_cmd.o 00:02:21.383 CC lib/nvme/nvme_ns.o 00:02:21.383 CC lib/nvme/nvme_pcie_common.o 00:02:21.383 CC lib/nvme/nvme_pcie.o 00:02:21.383 CC lib/nvme/nvme_qpair.o 00:02:21.383 CC lib/nvme/nvme.o 00:02:21.383 CC lib/nvme/nvme_quirks.o 00:02:21.383 CC lib/nvme/nvme_transport.o 00:02:21.383 CC lib/nvme/nvme_discovery.o 00:02:21.383 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:21.383 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:21.383 CC lib/nvme/nvme_tcp.o 00:02:21.383 CC lib/nvme/nvme_opal.o 00:02:21.383 CC lib/nvme/nvme_io_msg.o 00:02:21.383 CC lib/nvme/nvme_poll_group.o 00:02:21.383 CC lib/nvme/nvme_zns.o 00:02:21.383 CC lib/nvme/nvme_stubs.o 00:02:21.383 CC lib/nvme/nvme_auth.o 00:02:21.383 CC lib/nvme/nvme_cuse.o 00:02:21.383 CC lib/nvme/nvme_vfio_user.o 00:02:21.383 CC lib/nvme/nvme_rdma.o 00:02:21.641 LIB libspdk_thread.a 00:02:21.899 SO 
libspdk_thread.so.11.0 00:02:21.899 SYMLINK libspdk_thread.so 00:02:22.156 CC lib/virtio/virtio.o 00:02:22.156 CC lib/virtio/virtio_vhost_user.o 00:02:22.156 CC lib/virtio/virtio_pci.o 00:02:22.156 CC lib/virtio/virtio_vfio_user.o 00:02:22.156 CC lib/accel/accel_rpc.o 00:02:22.156 CC lib/accel/accel.o 00:02:22.156 CC lib/accel/accel_sw.o 00:02:22.156 CC lib/blob/blobstore.o 00:02:22.156 CC lib/blob/zeroes.o 00:02:22.156 CC lib/blob/request.o 00:02:22.156 CC lib/blob/blob_bs_dev.o 00:02:22.156 CC lib/vfu_tgt/tgt_endpoint.o 00:02:22.156 CC lib/vfu_tgt/tgt_rpc.o 00:02:22.156 CC lib/init/json_config.o 00:02:22.156 CC lib/init/subsystem.o 00:02:22.156 CC lib/init/subsystem_rpc.o 00:02:22.156 CC lib/init/rpc.o 00:02:22.156 CC lib/fsdev/fsdev.o 00:02:22.156 CC lib/fsdev/fsdev_io.o 00:02:22.156 CC lib/fsdev/fsdev_rpc.o 00:02:22.414 LIB libspdk_init.a 00:02:22.414 SO libspdk_init.so.6.0 00:02:22.414 LIB libspdk_virtio.a 00:02:22.414 LIB libspdk_vfu_tgt.a 00:02:22.414 SO libspdk_virtio.so.7.0 00:02:22.414 SO libspdk_vfu_tgt.so.3.0 00:02:22.414 SYMLINK libspdk_init.so 00:02:22.672 SYMLINK libspdk_vfu_tgt.so 00:02:22.672 SYMLINK libspdk_virtio.so 00:02:22.672 LIB libspdk_fsdev.a 00:02:22.672 SO libspdk_fsdev.so.2.0 00:02:22.672 CC lib/event/app.o 00:02:22.672 CC lib/event/reactor.o 00:02:22.672 CC lib/event/log_rpc.o 00:02:22.672 CC lib/event/app_rpc.o 00:02:22.672 CC lib/event/scheduler_static.o 00:02:22.931 SYMLINK libspdk_fsdev.so 00:02:22.931 LIB libspdk_accel.a 00:02:22.931 SO libspdk_accel.so.16.0 00:02:23.190 SYMLINK libspdk_accel.so 00:02:23.190 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:23.190 LIB libspdk_event.a 00:02:23.190 LIB libspdk_nvme.a 00:02:23.190 SO libspdk_event.so.14.0 00:02:23.190 SO libspdk_nvme.so.15.0 00:02:23.190 SYMLINK libspdk_event.so 00:02:23.448 CC lib/bdev/bdev.o 00:02:23.448 CC lib/bdev/bdev_rpc.o 00:02:23.448 CC lib/bdev/bdev_zone.o 00:02:23.448 CC lib/bdev/part.o 00:02:23.448 CC lib/bdev/scsi_nvme.o 00:02:23.448 SYMLINK libspdk_nvme.so 
00:02:23.708 LIB libspdk_fuse_dispatcher.a 00:02:23.708 SO libspdk_fuse_dispatcher.so.1.0 00:02:23.708 SYMLINK libspdk_fuse_dispatcher.so 00:02:24.275 LIB libspdk_blob.a 00:02:24.275 SO libspdk_blob.so.11.0 00:02:24.534 SYMLINK libspdk_blob.so 00:02:24.793 CC lib/blobfs/blobfs.o 00:02:24.793 CC lib/blobfs/tree.o 00:02:24.793 CC lib/lvol/lvol.o 00:02:25.359 LIB libspdk_bdev.a 00:02:25.359 SO libspdk_bdev.so.17.0 00:02:25.359 LIB libspdk_blobfs.a 00:02:25.359 SYMLINK libspdk_bdev.so 00:02:25.359 SO libspdk_blobfs.so.10.0 00:02:25.359 LIB libspdk_lvol.a 00:02:25.359 SO libspdk_lvol.so.10.0 00:02:25.359 SYMLINK libspdk_blobfs.so 00:02:25.619 SYMLINK libspdk_lvol.so 00:02:25.619 CC lib/nbd/nbd.o 00:02:25.620 CC lib/nbd/nbd_rpc.o 00:02:25.620 CC lib/scsi/dev.o 00:02:25.620 CC lib/scsi/lun.o 00:02:25.620 CC lib/scsi/port.o 00:02:25.620 CC lib/scsi/scsi.o 00:02:25.620 CC lib/scsi/scsi_bdev.o 00:02:25.620 CC lib/scsi/scsi_pr.o 00:02:25.620 CC lib/scsi/scsi_rpc.o 00:02:25.620 CC lib/nvmf/ctrlr.o 00:02:25.620 CC lib/scsi/task.o 00:02:25.620 CC lib/ftl/ftl_core.o 00:02:25.620 CC lib/nvmf/ctrlr_discovery.o 00:02:25.620 CC lib/nvmf/ctrlr_bdev.o 00:02:25.620 CC lib/ftl/ftl_init.o 00:02:25.620 CC lib/nvmf/subsystem.o 00:02:25.620 CC lib/ftl/ftl_layout.o 00:02:25.620 CC lib/nvmf/nvmf.o 00:02:25.620 CC lib/ublk/ublk.o 00:02:25.620 CC lib/ftl/ftl_debug.o 00:02:25.620 CC lib/ftl/ftl_io.o 00:02:25.620 CC lib/nvmf/nvmf_rpc.o 00:02:25.620 CC lib/nvmf/transport.o 00:02:25.620 CC lib/ftl/ftl_sb.o 00:02:25.620 CC lib/ublk/ublk_rpc.o 00:02:25.620 CC lib/nvmf/tcp.o 00:02:25.620 CC lib/ftl/ftl_l2p.o 00:02:25.620 CC lib/ftl/ftl_nv_cache.o 00:02:25.620 CC lib/nvmf/stubs.o 00:02:25.620 CC lib/ftl/ftl_l2p_flat.o 00:02:25.620 CC lib/nvmf/mdns_server.o 00:02:25.620 CC lib/nvmf/vfio_user.o 00:02:25.620 CC lib/ftl/ftl_band.o 00:02:25.620 CC lib/ftl/ftl_band_ops.o 00:02:25.620 CC lib/nvmf/rdma.o 00:02:25.620 CC lib/ftl/ftl_writer.o 00:02:25.620 CC lib/ftl/ftl_reloc.o 00:02:25.620 CC lib/ftl/ftl_rq.o 
00:02:25.620 CC lib/nvmf/auth.o 00:02:25.620 CC lib/ftl/ftl_l2p_cache.o 00:02:25.620 CC lib/ftl/ftl_p2l.o 00:02:25.620 CC lib/ftl/ftl_p2l_log.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:25.620 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:25.878 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:25.878 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:25.878 CC lib/ftl/utils/ftl_conf.o 00:02:25.878 CC lib/ftl/utils/ftl_md.o 00:02:25.878 CC lib/ftl/utils/ftl_mempool.o 00:02:25.878 CC lib/ftl/utils/ftl_bitmap.o 00:02:25.878 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:25.878 CC lib/ftl/utils/ftl_property.o 00:02:25.878 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:25.878 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:25.878 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:25.878 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:25.878 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:25.878 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:25.878 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:25.878 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:25.878 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:25.878 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:25.878 CC lib/ftl/base/ftl_base_dev.o 00:02:25.878 CC lib/ftl/base/ftl_base_bdev.o 00:02:25.878 CC lib/ftl/ftl_trace.o 00:02:25.878 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:25.878 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:26.445 LIB libspdk_nbd.a 00:02:26.445 SO libspdk_nbd.so.7.0 00:02:26.445 SYMLINK libspdk_nbd.so 00:02:26.445 LIB libspdk_scsi.a 00:02:26.445 SO libspdk_scsi.so.9.0 00:02:26.445 LIB libspdk_ublk.a 00:02:26.704 SO libspdk_ublk.so.3.0 00:02:26.704 SYMLINK libspdk_scsi.so 
00:02:26.704 SYMLINK libspdk_ublk.so 00:02:26.704 LIB libspdk_ftl.a 00:02:26.704 SO libspdk_ftl.so.9.0 00:02:26.963 CC lib/vhost/vhost.o 00:02:26.963 CC lib/vhost/vhost_rpc.o 00:02:26.963 CC lib/vhost/vhost_scsi.o 00:02:26.963 CC lib/iscsi/conn.o 00:02:26.963 CC lib/vhost/vhost_blk.o 00:02:26.963 CC lib/iscsi/init_grp.o 00:02:26.963 CC lib/iscsi/iscsi.o 00:02:26.963 CC lib/vhost/rte_vhost_user.o 00:02:26.963 CC lib/iscsi/param.o 00:02:26.963 CC lib/iscsi/portal_grp.o 00:02:26.963 CC lib/iscsi/tgt_node.o 00:02:26.963 CC lib/iscsi/iscsi_subsystem.o 00:02:26.963 CC lib/iscsi/iscsi_rpc.o 00:02:26.963 CC lib/iscsi/task.o 00:02:26.963 SYMLINK libspdk_ftl.so 00:02:27.529 LIB libspdk_nvmf.a 00:02:27.529 SO libspdk_nvmf.so.20.0 00:02:27.788 LIB libspdk_vhost.a 00:02:27.789 SO libspdk_vhost.so.8.0 00:02:27.789 SYMLINK libspdk_nvmf.so 00:02:27.789 SYMLINK libspdk_vhost.so 00:02:28.047 LIB libspdk_iscsi.a 00:02:28.047 SO libspdk_iscsi.so.8.0 00:02:28.047 SYMLINK libspdk_iscsi.so 00:02:28.615 CC module/env_dpdk/env_dpdk_rpc.o 00:02:28.615 CC module/vfu_device/vfu_virtio_blk.o 00:02:28.615 CC module/vfu_device/vfu_virtio.o 00:02:28.615 CC module/vfu_device/vfu_virtio_scsi.o 00:02:28.615 CC module/vfu_device/vfu_virtio_rpc.o 00:02:28.615 CC module/vfu_device/vfu_virtio_fs.o 00:02:28.874 CC module/accel/iaa/accel_iaa.o 00:02:28.874 CC module/accel/iaa/accel_iaa_rpc.o 00:02:28.874 CC module/sock/posix/posix.o 00:02:28.874 LIB libspdk_env_dpdk_rpc.a 00:02:28.874 CC module/accel/error/accel_error.o 00:02:28.874 CC module/keyring/file/keyring.o 00:02:28.874 CC module/accel/dsa/accel_dsa.o 00:02:28.874 CC module/accel/error/accel_error_rpc.o 00:02:28.874 CC module/keyring/file/keyring_rpc.o 00:02:28.874 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:28.874 CC module/fsdev/aio/fsdev_aio.o 00:02:28.874 CC module/accel/ioat/accel_ioat.o 00:02:28.874 CC module/keyring/linux/keyring.o 00:02:28.874 CC module/accel/ioat/accel_ioat_rpc.o 00:02:28.874 CC module/accel/dsa/accel_dsa_rpc.o 
00:02:28.874 CC module/blob/bdev/blob_bdev.o 00:02:28.874 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:28.874 CC module/fsdev/aio/linux_aio_mgr.o 00:02:28.874 CC module/keyring/linux/keyring_rpc.o 00:02:28.874 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:28.874 CC module/scheduler/gscheduler/gscheduler.o 00:02:28.874 SO libspdk_env_dpdk_rpc.so.6.0 00:02:28.874 SYMLINK libspdk_env_dpdk_rpc.so 00:02:28.874 LIB libspdk_keyring_file.a 00:02:28.874 LIB libspdk_keyring_linux.a 00:02:28.874 LIB libspdk_scheduler_gscheduler.a 00:02:28.874 LIB libspdk_scheduler_dpdk_governor.a 00:02:28.874 LIB libspdk_accel_iaa.a 00:02:28.874 SO libspdk_keyring_file.so.2.0 00:02:28.874 SO libspdk_keyring_linux.so.1.0 00:02:28.874 LIB libspdk_accel_error.a 00:02:28.874 SO libspdk_scheduler_gscheduler.so.4.0 00:02:28.874 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:28.874 LIB libspdk_accel_ioat.a 00:02:29.133 SO libspdk_accel_iaa.so.3.0 00:02:29.133 SO libspdk_accel_error.so.2.0 00:02:29.133 SYMLINK libspdk_keyring_file.so 00:02:29.133 SO libspdk_accel_ioat.so.6.0 00:02:29.133 SYMLINK libspdk_keyring_linux.so 00:02:29.133 SYMLINK libspdk_scheduler_gscheduler.so 00:02:29.133 LIB libspdk_blob_bdev.a 00:02:29.133 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:29.133 SYMLINK libspdk_accel_iaa.so 00:02:29.133 LIB libspdk_accel_dsa.a 00:02:29.133 LIB libspdk_scheduler_dynamic.a 00:02:29.133 SO libspdk_blob_bdev.so.11.0 00:02:29.133 SYMLINK libspdk_accel_error.so 00:02:29.133 SO libspdk_accel_dsa.so.5.0 00:02:29.133 SYMLINK libspdk_accel_ioat.so 00:02:29.133 SO libspdk_scheduler_dynamic.so.4.0 00:02:29.133 SYMLINK libspdk_blob_bdev.so 00:02:29.133 SYMLINK libspdk_scheduler_dynamic.so 00:02:29.133 LIB libspdk_vfu_device.a 00:02:29.133 SYMLINK libspdk_accel_dsa.so 00:02:29.133 SO libspdk_vfu_device.so.3.0 00:02:29.392 SYMLINK libspdk_vfu_device.so 00:02:29.392 LIB libspdk_fsdev_aio.a 00:02:29.392 LIB libspdk_sock_posix.a 00:02:29.392 SO libspdk_fsdev_aio.so.1.0 00:02:29.392 SO 
libspdk_sock_posix.so.6.0 00:02:29.392 SYMLINK libspdk_fsdev_aio.so 00:02:29.651 SYMLINK libspdk_sock_posix.so 00:02:29.651 CC module/bdev/split/vbdev_split.o 00:02:29.651 CC module/bdev/delay/vbdev_delay.o 00:02:29.651 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:29.651 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.651 CC module/bdev/split/vbdev_split_rpc.o 00:02:29.651 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.651 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.651 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:29.651 CC module/bdev/iscsi/bdev_iscsi.o 00:02:29.651 CC module/bdev/gpt/gpt.o 00:02:29.651 CC module/bdev/gpt/vbdev_gpt.o 00:02:29.651 CC module/bdev/null/bdev_null.o 00:02:29.651 CC module/bdev/null/bdev_null_rpc.o 00:02:29.651 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:29.651 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:29.651 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:29.651 CC module/bdev/ftl/bdev_ftl.o 00:02:29.651 CC module/bdev/error/vbdev_error_rpc.o 00:02:29.651 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:29.651 CC module/bdev/error/vbdev_error.o 00:02:29.651 CC module/bdev/passthru/vbdev_passthru.o 00:02:29.651 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:29.651 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:29.651 CC module/bdev/aio/bdev_aio.o 00:02:29.651 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:29.651 CC module/bdev/malloc/bdev_malloc.o 00:02:29.651 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:29.651 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:29.651 CC module/bdev/aio/bdev_aio_rpc.o 00:02:29.651 CC module/bdev/nvme/bdev_nvme.o 00:02:29.651 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:29.651 CC module/bdev/nvme/nvme_rpc.o 00:02:29.651 CC module/bdev/nvme/bdev_mdns_client.o 00:02:29.651 CC module/bdev/nvme/vbdev_opal.o 00:02:29.651 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:29.651 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:29.651 CC module/bdev/raid/bdev_raid.o 00:02:29.651 CC module/bdev/raid/bdev_raid_sb.o 
00:02:29.651 CC module/bdev/raid/bdev_raid_rpc.o 00:02:29.651 CC module/bdev/raid/raid0.o 00:02:29.651 CC module/bdev/raid/raid1.o 00:02:29.651 CC module/bdev/raid/concat.o 00:02:29.910 LIB libspdk_blobfs_bdev.a 00:02:29.910 SO libspdk_blobfs_bdev.so.6.0 00:02:29.910 LIB libspdk_bdev_split.a 00:02:29.910 SYMLINK libspdk_blobfs_bdev.so 00:02:29.910 LIB libspdk_bdev_null.a 00:02:29.910 LIB libspdk_bdev_error.a 00:02:29.910 SO libspdk_bdev_split.so.6.0 00:02:29.910 LIB libspdk_bdev_passthru.a 00:02:29.910 SO libspdk_bdev_null.so.6.0 00:02:29.910 LIB libspdk_bdev_gpt.a 00:02:29.910 LIB libspdk_bdev_ftl.a 00:02:29.910 SO libspdk_bdev_error.so.6.0 00:02:29.910 LIB libspdk_bdev_zone_block.a 00:02:29.910 LIB libspdk_bdev_aio.a 00:02:29.910 SO libspdk_bdev_passthru.so.6.0 00:02:29.910 LIB libspdk_bdev_malloc.a 00:02:29.910 SO libspdk_bdev_gpt.so.6.0 00:02:29.910 SO libspdk_bdev_ftl.so.6.0 00:02:29.910 SO libspdk_bdev_zone_block.so.6.0 00:02:29.910 SO libspdk_bdev_aio.so.6.0 00:02:29.910 SYMLINK libspdk_bdev_split.so 00:02:29.910 SYMLINK libspdk_bdev_null.so 00:02:29.910 LIB libspdk_bdev_delay.a 00:02:29.910 SYMLINK libspdk_bdev_error.so 00:02:30.169 LIB libspdk_bdev_iscsi.a 00:02:30.169 SO libspdk_bdev_malloc.so.6.0 00:02:30.169 SYMLINK libspdk_bdev_passthru.so 00:02:30.169 SO libspdk_bdev_delay.so.6.0 00:02:30.169 SYMLINK libspdk_bdev_gpt.so 00:02:30.169 SYMLINK libspdk_bdev_zone_block.so 00:02:30.169 SYMLINK libspdk_bdev_aio.so 00:02:30.169 SO libspdk_bdev_iscsi.so.6.0 00:02:30.169 SYMLINK libspdk_bdev_ftl.so 00:02:30.169 LIB libspdk_bdev_lvol.a 00:02:30.169 SYMLINK libspdk_bdev_malloc.so 00:02:30.169 LIB libspdk_bdev_virtio.a 00:02:30.169 SYMLINK libspdk_bdev_delay.so 00:02:30.169 SO libspdk_bdev_lvol.so.6.0 00:02:30.169 SYMLINK libspdk_bdev_iscsi.so 00:02:30.169 SO libspdk_bdev_virtio.so.6.0 00:02:30.169 SYMLINK libspdk_bdev_lvol.so 00:02:30.169 SYMLINK libspdk_bdev_virtio.so 00:02:30.428 LIB libspdk_bdev_raid.a 00:02:30.428 SO libspdk_bdev_raid.so.6.0 00:02:30.687 
SYMLINK libspdk_bdev_raid.so 00:02:31.630 LIB libspdk_bdev_nvme.a 00:02:31.630 SO libspdk_bdev_nvme.so.7.1 00:02:31.630 SYMLINK libspdk_bdev_nvme.so 00:02:32.199 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:32.199 CC module/event/subsystems/sock/sock.o 00:02:32.199 CC module/event/subsystems/vmd/vmd.o 00:02:32.199 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:32.199 CC module/event/subsystems/keyring/keyring.o 00:02:32.199 CC module/event/subsystems/scheduler/scheduler.o 00:02:32.199 CC module/event/subsystems/iobuf/iobuf.o 00:02:32.199 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:32.199 CC module/event/subsystems/fsdev/fsdev.o 00:02:32.199 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:32.459 LIB libspdk_event_vmd.a 00:02:32.459 LIB libspdk_event_sock.a 00:02:32.459 LIB libspdk_event_vhost_blk.a 00:02:32.459 LIB libspdk_event_keyring.a 00:02:32.459 LIB libspdk_event_scheduler.a 00:02:32.459 LIB libspdk_event_fsdev.a 00:02:32.459 SO libspdk_event_vmd.so.6.0 00:02:32.460 LIB libspdk_event_vfu_tgt.a 00:02:32.460 LIB libspdk_event_iobuf.a 00:02:32.460 SO libspdk_event_sock.so.5.0 00:02:32.460 SO libspdk_event_vhost_blk.so.3.0 00:02:32.460 SO libspdk_event_keyring.so.1.0 00:02:32.460 SO libspdk_event_scheduler.so.4.0 00:02:32.460 SO libspdk_event_fsdev.so.1.0 00:02:32.460 SO libspdk_event_iobuf.so.3.0 00:02:32.460 SO libspdk_event_vfu_tgt.so.3.0 00:02:32.460 SYMLINK libspdk_event_vmd.so 00:02:32.460 SYMLINK libspdk_event_keyring.so 00:02:32.460 SYMLINK libspdk_event_sock.so 00:02:32.460 SYMLINK libspdk_event_vhost_blk.so 00:02:32.460 SYMLINK libspdk_event_scheduler.so 00:02:32.460 SYMLINK libspdk_event_fsdev.so 00:02:32.460 SYMLINK libspdk_event_iobuf.so 00:02:32.460 SYMLINK libspdk_event_vfu_tgt.so 00:02:32.719 CC module/event/subsystems/accel/accel.o 00:02:32.978 LIB libspdk_event_accel.a 00:02:32.978 SO libspdk_event_accel.so.6.0 00:02:32.978 SYMLINK libspdk_event_accel.so 00:02:33.546 CC module/event/subsystems/bdev/bdev.o 00:02:33.546 LIB 
libspdk_event_bdev.a 00:02:33.546 SO libspdk_event_bdev.so.6.0 00:02:33.546 SYMLINK libspdk_event_bdev.so 00:02:34.114 CC module/event/subsystems/scsi/scsi.o 00:02:34.114 CC module/event/subsystems/nbd/nbd.o 00:02:34.114 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:34.115 CC module/event/subsystems/ublk/ublk.o 00:02:34.115 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.115 LIB libspdk_event_nbd.a 00:02:34.115 LIB libspdk_event_ublk.a 00:02:34.115 LIB libspdk_event_scsi.a 00:02:34.115 SO libspdk_event_nbd.so.6.0 00:02:34.115 SO libspdk_event_ublk.so.3.0 00:02:34.115 SO libspdk_event_scsi.so.6.0 00:02:34.115 LIB libspdk_event_nvmf.a 00:02:34.115 SYMLINK libspdk_event_nbd.so 00:02:34.115 SYMLINK libspdk_event_ublk.so 00:02:34.115 SYMLINK libspdk_event_scsi.so 00:02:34.115 SO libspdk_event_nvmf.so.6.0 00:02:34.374 SYMLINK libspdk_event_nvmf.so 00:02:34.374 CC module/event/subsystems/iscsi/iscsi.o 00:02:34.374 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:34.633 LIB libspdk_event_vhost_scsi.a 00:02:34.633 LIB libspdk_event_iscsi.a 00:02:34.633 SO libspdk_event_vhost_scsi.so.3.0 00:02:34.633 SO libspdk_event_iscsi.so.6.0 00:02:34.633 SYMLINK libspdk_event_vhost_scsi.so 00:02:34.633 SYMLINK libspdk_event_iscsi.so 00:02:34.892 SO libspdk.so.6.0 00:02:34.892 SYMLINK libspdk.so 00:02:35.151 CXX app/trace/trace.o 00:02:35.151 CC app/spdk_lspci/spdk_lspci.o 00:02:35.151 CC app/spdk_nvme_discover/discovery_aer.o 00:02:35.151 CC app/spdk_nvme_identify/identify.o 00:02:35.422 CC app/spdk_nvme_perf/perf.o 00:02:35.422 CC app/trace_record/trace_record.o 00:02:35.422 CC test/rpc_client/rpc_client_test.o 00:02:35.422 TEST_HEADER include/spdk/accel.h 00:02:35.422 TEST_HEADER include/spdk/accel_module.h 00:02:35.422 TEST_HEADER include/spdk/assert.h 00:02:35.422 TEST_HEADER include/spdk/barrier.h 00:02:35.422 CC app/spdk_top/spdk_top.o 00:02:35.422 TEST_HEADER include/spdk/bdev.h 00:02:35.422 TEST_HEADER include/spdk/base64.h 00:02:35.422 TEST_HEADER 
include/spdk/bdev_module.h 00:02:35.422 TEST_HEADER include/spdk/bdev_zone.h 00:02:35.422 TEST_HEADER include/spdk/bit_pool.h 00:02:35.422 TEST_HEADER include/spdk/bit_array.h 00:02:35.422 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:35.422 TEST_HEADER include/spdk/blob_bdev.h 00:02:35.422 TEST_HEADER include/spdk/blobfs.h 00:02:35.422 TEST_HEADER include/spdk/blob.h 00:02:35.422 TEST_HEADER include/spdk/conf.h 00:02:35.422 TEST_HEADER include/spdk/config.h 00:02:35.422 TEST_HEADER include/spdk/cpuset.h 00:02:35.422 TEST_HEADER include/spdk/crc32.h 00:02:35.422 TEST_HEADER include/spdk/crc16.h 00:02:35.422 TEST_HEADER include/spdk/dif.h 00:02:35.422 TEST_HEADER include/spdk/crc64.h 00:02:35.422 TEST_HEADER include/spdk/dma.h 00:02:35.422 TEST_HEADER include/spdk/endian.h 00:02:35.422 TEST_HEADER include/spdk/env.h 00:02:35.423 TEST_HEADER include/spdk/env_dpdk.h 00:02:35.423 TEST_HEADER include/spdk/event.h 00:02:35.423 TEST_HEADER include/spdk/fd_group.h 00:02:35.423 TEST_HEADER include/spdk/file.h 00:02:35.423 TEST_HEADER include/spdk/fd.h 00:02:35.423 TEST_HEADER include/spdk/fsdev.h 00:02:35.423 TEST_HEADER include/spdk/fsdev_module.h 00:02:35.423 TEST_HEADER include/spdk/ftl.h 00:02:35.423 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:35.423 TEST_HEADER include/spdk/gpt_spec.h 00:02:35.423 TEST_HEADER include/spdk/hexlify.h 00:02:35.423 TEST_HEADER include/spdk/histogram_data.h 00:02:35.423 TEST_HEADER include/spdk/idxd.h 00:02:35.423 TEST_HEADER include/spdk/init.h 00:02:35.423 TEST_HEADER include/spdk/idxd_spec.h 00:02:35.423 TEST_HEADER include/spdk/ioat.h 00:02:35.423 TEST_HEADER include/spdk/iscsi_spec.h 00:02:35.423 TEST_HEADER include/spdk/ioat_spec.h 00:02:35.423 TEST_HEADER include/spdk/json.h 00:02:35.423 CC app/spdk_dd/spdk_dd.o 00:02:35.423 TEST_HEADER include/spdk/jsonrpc.h 00:02:35.423 TEST_HEADER include/spdk/keyring.h 00:02:35.423 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:35.423 TEST_HEADER include/spdk/likely.h 00:02:35.423 
TEST_HEADER include/spdk/keyring_module.h 00:02:35.423 TEST_HEADER include/spdk/log.h 00:02:35.423 CC app/iscsi_tgt/iscsi_tgt.o 00:02:35.423 TEST_HEADER include/spdk/md5.h 00:02:35.423 TEST_HEADER include/spdk/lvol.h 00:02:35.423 TEST_HEADER include/spdk/memory.h 00:02:35.423 TEST_HEADER include/spdk/net.h 00:02:35.423 TEST_HEADER include/spdk/nbd.h 00:02:35.423 TEST_HEADER include/spdk/mmio.h 00:02:35.423 TEST_HEADER include/spdk/notify.h 00:02:35.423 TEST_HEADER include/spdk/nvme.h 00:02:35.423 TEST_HEADER include/spdk/nvme_intel.h 00:02:35.423 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:35.423 CC app/nvmf_tgt/nvmf_main.o 00:02:35.423 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:35.423 TEST_HEADER include/spdk/nvme_zns.h 00:02:35.423 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:35.423 TEST_HEADER include/spdk/nvme_spec.h 00:02:35.423 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:35.423 TEST_HEADER include/spdk/nvmf.h 00:02:35.423 TEST_HEADER include/spdk/opal.h 00:02:35.423 TEST_HEADER include/spdk/nvmf_spec.h 00:02:35.423 TEST_HEADER include/spdk/opal_spec.h 00:02:35.423 TEST_HEADER include/spdk/pci_ids.h 00:02:35.423 TEST_HEADER include/spdk/nvmf_transport.h 00:02:35.423 TEST_HEADER include/spdk/pipe.h 00:02:35.423 TEST_HEADER include/spdk/queue.h 00:02:35.423 TEST_HEADER include/spdk/reduce.h 00:02:35.423 TEST_HEADER include/spdk/rpc.h 00:02:35.423 TEST_HEADER include/spdk/scheduler.h 00:02:35.423 TEST_HEADER include/spdk/scsi.h 00:02:35.423 TEST_HEADER include/spdk/sock.h 00:02:35.423 TEST_HEADER include/spdk/scsi_spec.h 00:02:35.423 TEST_HEADER include/spdk/stdinc.h 00:02:35.423 TEST_HEADER include/spdk/string.h 00:02:35.423 TEST_HEADER include/spdk/trace.h 00:02:35.423 TEST_HEADER include/spdk/thread.h 00:02:35.423 TEST_HEADER include/spdk/trace_parser.h 00:02:35.423 TEST_HEADER include/spdk/tree.h 00:02:35.423 TEST_HEADER include/spdk/ublk.h 00:02:35.423 TEST_HEADER include/spdk/util.h 00:02:35.423 TEST_HEADER include/spdk/version.h 00:02:35.423 
TEST_HEADER include/spdk/uuid.h 00:02:35.423 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:35.423 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:35.423 TEST_HEADER include/spdk/vhost.h 00:02:35.423 TEST_HEADER include/spdk/vmd.h 00:02:35.423 CC app/spdk_tgt/spdk_tgt.o 00:02:35.423 CXX test/cpp_headers/accel.o 00:02:35.423 TEST_HEADER include/spdk/xor.h 00:02:35.423 TEST_HEADER include/spdk/zipf.h 00:02:35.423 CXX test/cpp_headers/accel_module.o 00:02:35.423 CXX test/cpp_headers/base64.o 00:02:35.423 CXX test/cpp_headers/assert.o 00:02:35.423 CXX test/cpp_headers/bdev.o 00:02:35.423 CXX test/cpp_headers/bdev_module.o 00:02:35.423 CXX test/cpp_headers/barrier.o 00:02:35.423 CXX test/cpp_headers/bit_array.o 00:02:35.423 CXX test/cpp_headers/bdev_zone.o 00:02:35.423 CXX test/cpp_headers/blobfs.o 00:02:35.423 CXX test/cpp_headers/bit_pool.o 00:02:35.423 CXX test/cpp_headers/blob.o 00:02:35.423 CXX test/cpp_headers/blob_bdev.o 00:02:35.423 CXX test/cpp_headers/blobfs_bdev.o 00:02:35.423 CXX test/cpp_headers/config.o 00:02:35.423 CXX test/cpp_headers/conf.o 00:02:35.423 CXX test/cpp_headers/cpuset.o 00:02:35.423 CXX test/cpp_headers/crc32.o 00:02:35.423 CXX test/cpp_headers/crc16.o 00:02:35.423 CXX test/cpp_headers/dif.o 00:02:35.423 CXX test/cpp_headers/dma.o 00:02:35.423 CXX test/cpp_headers/endian.o 00:02:35.423 CXX test/cpp_headers/crc64.o 00:02:35.423 CXX test/cpp_headers/env_dpdk.o 00:02:35.423 CXX test/cpp_headers/event.o 00:02:35.423 CXX test/cpp_headers/env.o 00:02:35.423 CXX test/cpp_headers/fd_group.o 00:02:35.423 CXX test/cpp_headers/file.o 00:02:35.423 CXX test/cpp_headers/fd.o 00:02:35.423 CXX test/cpp_headers/fsdev.o 00:02:35.423 CXX test/cpp_headers/ftl.o 00:02:35.423 CXX test/cpp_headers/fsdev_module.o 00:02:35.423 CXX test/cpp_headers/hexlify.o 00:02:35.423 CXX test/cpp_headers/fuse_dispatcher.o 00:02:35.423 CXX test/cpp_headers/gpt_spec.o 00:02:35.423 CXX test/cpp_headers/idxd.o 00:02:35.423 CXX test/cpp_headers/histogram_data.o 00:02:35.423 CXX 
test/cpp_headers/idxd_spec.o 00:02:35.423 CXX test/cpp_headers/init.o 00:02:35.423 CXX test/cpp_headers/ioat.o 00:02:35.423 CXX test/cpp_headers/iscsi_spec.o 00:02:35.423 CXX test/cpp_headers/ioat_spec.o 00:02:35.423 CXX test/cpp_headers/jsonrpc.o 00:02:35.423 CXX test/cpp_headers/json.o 00:02:35.423 CXX test/cpp_headers/keyring.o 00:02:35.423 CXX test/cpp_headers/keyring_module.o 00:02:35.423 CXX test/cpp_headers/lvol.o 00:02:35.423 CXX test/cpp_headers/likely.o 00:02:35.423 CXX test/cpp_headers/log.o 00:02:35.423 CXX test/cpp_headers/md5.o 00:02:35.423 CXX test/cpp_headers/mmio.o 00:02:35.423 CXX test/cpp_headers/memory.o 00:02:35.423 CXX test/cpp_headers/nbd.o 00:02:35.423 CXX test/cpp_headers/net.o 00:02:35.423 CXX test/cpp_headers/notify.o 00:02:35.423 CXX test/cpp_headers/nvme.o 00:02:35.423 CXX test/cpp_headers/nvme_intel.o 00:02:35.423 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:35.423 CXX test/cpp_headers/nvme_spec.o 00:02:35.423 CXX test/cpp_headers/nvme_ocssd.o 00:02:35.423 CXX test/cpp_headers/nvmf_cmd.o 00:02:35.423 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:35.423 CXX test/cpp_headers/nvme_zns.o 00:02:35.423 CXX test/cpp_headers/nvmf.o 00:02:35.423 CXX test/cpp_headers/nvmf_transport.o 00:02:35.423 CXX test/cpp_headers/nvmf_spec.o 00:02:35.423 CXX test/cpp_headers/opal.o 00:02:35.423 CC test/env/pci/pci_ut.o 00:02:35.423 CC test/app/stub/stub.o 00:02:35.423 CC test/env/memory/memory_ut.o 00:02:35.423 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:35.423 CC test/app/jsoncat/jsoncat.o 00:02:35.423 CC test/env/vtophys/vtophys.o 00:02:35.423 CC test/thread/poller_perf/poller_perf.o 00:02:35.423 CC examples/ioat/verify/verify.o 00:02:35.423 LINK spdk_lspci 00:02:35.423 CC test/app/histogram_perf/histogram_perf.o 00:02:35.423 CC examples/util/zipf/zipf.o 00:02:35.424 CC examples/ioat/perf/perf.o 00:02:35.424 CC app/fio/nvme/fio_plugin.o 00:02:35.697 CC test/dma/test_dma/test_dma.o 00:02:35.697 CC app/fio/bdev/fio_plugin.o 00:02:35.697 CC 
test/app/bdev_svc/bdev_svc.o 00:02:35.962 LINK interrupt_tgt 00:02:35.962 CC test/env/mem_callbacks/mem_callbacks.o 00:02:35.962 LINK spdk_nvme_discover 00:02:35.962 LINK iscsi_tgt 00:02:35.962 LINK rpc_client_test 00:02:35.962 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:35.962 LINK poller_perf 00:02:35.962 LINK vtophys 00:02:35.962 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:35.962 LINK nvmf_tgt 00:02:35.962 CXX test/cpp_headers/opal_spec.o 00:02:35.962 LINK env_dpdk_post_init 00:02:35.962 CXX test/cpp_headers/pci_ids.o 00:02:35.962 LINK stub 00:02:35.962 CXX test/cpp_headers/pipe.o 00:02:35.962 CXX test/cpp_headers/queue.o 00:02:35.962 CXX test/cpp_headers/reduce.o 00:02:35.962 CXX test/cpp_headers/rpc.o 00:02:35.962 CXX test/cpp_headers/scheduler.o 00:02:35.962 CXX test/cpp_headers/scsi.o 00:02:35.962 CXX test/cpp_headers/scsi_spec.o 00:02:35.962 CXX test/cpp_headers/sock.o 00:02:35.962 CXX test/cpp_headers/stdinc.o 00:02:36.222 CXX test/cpp_headers/string.o 00:02:36.222 LINK spdk_trace_record 00:02:36.222 CXX test/cpp_headers/thread.o 00:02:36.222 CXX test/cpp_headers/trace.o 00:02:36.222 CXX test/cpp_headers/trace_parser.o 00:02:36.222 CXX test/cpp_headers/tree.o 00:02:36.222 CXX test/cpp_headers/ublk.o 00:02:36.222 CXX test/cpp_headers/util.o 00:02:36.222 CXX test/cpp_headers/uuid.o 00:02:36.222 CXX test/cpp_headers/version.o 00:02:36.223 CXX test/cpp_headers/vfio_user_pci.o 00:02:36.223 CXX test/cpp_headers/vfio_user_spec.o 00:02:36.223 CXX test/cpp_headers/vhost.o 00:02:36.223 CXX test/cpp_headers/vmd.o 00:02:36.223 CXX test/cpp_headers/xor.o 00:02:36.223 LINK jsoncat 00:02:36.223 CXX test/cpp_headers/zipf.o 00:02:36.223 LINK bdev_svc 00:02:36.223 LINK zipf 00:02:36.223 LINK histogram_perf 00:02:36.223 LINK verify 00:02:36.223 LINK spdk_dd 00:02:36.223 LINK spdk_tgt 00:02:36.223 LINK ioat_perf 00:02:36.223 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:36.223 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:36.223 LINK pci_ut 00:02:36.480 LINK 
spdk_trace 00:02:36.480 LINK spdk_bdev 00:02:36.480 CC test/event/reactor_perf/reactor_perf.o 00:02:36.480 CC test/event/reactor/reactor.o 00:02:36.480 CC test/event/event_perf/event_perf.o 00:02:36.480 LINK test_dma 00:02:36.480 LINK spdk_nvme 00:02:36.480 CC test/event/app_repeat/app_repeat.o 00:02:36.480 CC test/event/scheduler/scheduler.o 00:02:36.480 CC examples/vmd/led/led.o 00:02:36.480 CC examples/vmd/lsvmd/lsvmd.o 00:02:36.739 CC examples/sock/hello_world/hello_sock.o 00:02:36.739 LINK nvme_fuzz 00:02:36.739 CC examples/idxd/perf/perf.o 00:02:36.739 CC examples/thread/thread/thread_ex.o 00:02:36.739 LINK spdk_nvme_perf 00:02:36.739 LINK spdk_nvme_identify 00:02:36.739 LINK vhost_fuzz 00:02:36.739 LINK reactor_perf 00:02:36.739 LINK mem_callbacks 00:02:36.739 LINK reactor 00:02:36.739 LINK event_perf 00:02:36.739 LINK spdk_top 00:02:36.739 LINK led 00:02:36.739 LINK app_repeat 00:02:36.739 CC app/vhost/vhost.o 00:02:36.739 LINK lsvmd 00:02:36.739 LINK hello_sock 00:02:36.739 LINK scheduler 00:02:36.999 LINK thread 00:02:36.999 LINK idxd_perf 00:02:36.999 LINK vhost 00:02:36.999 LINK memory_ut 00:02:36.999 CC test/nvme/overhead/overhead.o 00:02:36.999 CC test/nvme/reserve/reserve.o 00:02:36.999 CC test/nvme/err_injection/err_injection.o 00:02:36.999 CC test/nvme/simple_copy/simple_copy.o 00:02:36.999 CC test/nvme/reset/reset.o 00:02:36.999 CC test/nvme/aer/aer.o 00:02:36.999 CC test/nvme/fdp/fdp.o 00:02:36.999 CC test/nvme/e2edp/nvme_dp.o 00:02:36.999 CC test/nvme/sgl/sgl.o 00:02:36.999 CC test/nvme/fused_ordering/fused_ordering.o 00:02:36.999 CC test/nvme/compliance/nvme_compliance.o 00:02:36.999 CC test/nvme/boot_partition/boot_partition.o 00:02:36.999 CC test/nvme/connect_stress/connect_stress.o 00:02:36.999 CC test/nvme/cuse/cuse.o 00:02:36.999 CC test/nvme/startup/startup.o 00:02:36.999 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:36.999 CC test/accel/dif/dif.o 00:02:36.999 CC test/blobfs/mkfs/mkfs.o 00:02:37.257 CC test/lvol/esnap/esnap.o 
00:02:37.257 LINK reserve 00:02:37.257 LINK boot_partition 00:02:37.257 LINK err_injection 00:02:37.257 LINK startup 00:02:37.257 LINK fused_ordering 00:02:37.257 LINK connect_stress 00:02:37.257 CC examples/nvme/hello_world/hello_world.o 00:02:37.257 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:37.257 LINK doorbell_aers 00:02:37.257 LINK simple_copy 00:02:37.258 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.258 CC examples/nvme/reconnect/reconnect.o 00:02:37.258 CC examples/nvme/arbitration/arbitration.o 00:02:37.258 CC examples/nvme/hotplug/hotplug.o 00:02:37.258 CC examples/nvme/abort/abort.o 00:02:37.258 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:37.258 LINK mkfs 00:02:37.258 LINK sgl 00:02:37.258 LINK reset 00:02:37.258 LINK overhead 00:02:37.258 LINK nvme_dp 00:02:37.258 LINK aer 00:02:37.258 LINK nvme_compliance 00:02:37.258 LINK fdp 00:02:37.515 CC examples/accel/perf/accel_perf.o 00:02:37.515 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:37.515 CC examples/blob/hello_world/hello_blob.o 00:02:37.515 CC examples/blob/cli/blobcli.o 00:02:37.515 LINK cmb_copy 00:02:37.515 LINK pmr_persistence 00:02:37.515 LINK hotplug 00:02:37.515 LINK hello_world 00:02:37.515 LINK arbitration 00:02:37.515 LINK reconnect 00:02:37.515 LINK iscsi_fuzz 00:02:37.515 LINK abort 00:02:37.781 LINK dif 00:02:37.781 LINK hello_blob 00:02:37.781 LINK hello_fsdev 00:02:37.781 LINK nvme_manage 00:02:37.781 LINK accel_perf 00:02:37.781 LINK blobcli 00:02:38.057 LINK cuse 00:02:38.342 CC test/bdev/bdevio/bdevio.o 00:02:38.343 CC examples/bdev/hello_world/hello_bdev.o 00:02:38.343 CC examples/bdev/bdevperf/bdevperf.o 00:02:38.622 LINK hello_bdev 00:02:38.622 LINK bdevio 00:02:38.893 LINK bdevperf 00:02:39.459 CC examples/nvmf/nvmf/nvmf.o 00:02:39.717 LINK nvmf 00:02:40.652 LINK esnap 00:02:40.911 00:02:40.911 real 0m55.682s 00:02:40.911 user 8m17.095s 00:02:40.911 sys 3m46.770s 00:02:40.911 16:56:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:40.911 
16:56:58 make -- common/autotest_common.sh@10 -- $ set +x 00:02:40.911 ************************************ 00:02:40.911 END TEST make 00:02:40.911 ************************************ 00:02:40.911 16:56:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:40.911 16:56:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:40.911 16:56:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:40.911 16:56:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.911 16:56:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:40.911 16:56:58 -- pm/common@44 -- $ pid=2218222 00:02:40.911 16:56:58 -- pm/common@50 -- $ kill -TERM 2218222 00:02:40.911 16:56:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.911 16:56:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:40.911 16:56:58 -- pm/common@44 -- $ pid=2218224 00:02:40.911 16:56:58 -- pm/common@50 -- $ kill -TERM 2218224 00:02:40.911 16:56:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.911 16:56:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:40.911 16:56:58 -- pm/common@44 -- $ pid=2218225 00:02:40.911 16:56:58 -- pm/common@50 -- $ kill -TERM 2218225 00:02:40.911 16:56:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.911 16:56:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:40.911 16:56:58 -- pm/common@44 -- $ pid=2218248 00:02:40.911 16:56:58 -- pm/common@50 -- $ sudo -E kill -TERM 2218248 00:02:40.911 16:56:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:40.911 16:56:58 -- spdk/autorun.sh@27 -- $ sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:41.170 16:56:59 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:41.170 16:56:59 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:41.170 16:56:59 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:41.170 16:56:59 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:41.170 16:56:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:41.170 16:56:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:41.170 16:56:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:41.170 16:56:59 -- scripts/common.sh@336 -- # IFS=.-: 00:02:41.170 16:56:59 -- scripts/common.sh@336 -- # read -ra ver1 00:02:41.170 16:56:59 -- scripts/common.sh@337 -- # IFS=.-: 00:02:41.171 16:56:59 -- scripts/common.sh@337 -- # read -ra ver2 00:02:41.171 16:56:59 -- scripts/common.sh@338 -- # local 'op=<' 00:02:41.171 16:56:59 -- scripts/common.sh@340 -- # ver1_l=2 00:02:41.171 16:56:59 -- scripts/common.sh@341 -- # ver2_l=1 00:02:41.171 16:56:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:41.171 16:56:59 -- scripts/common.sh@344 -- # case "$op" in 00:02:41.171 16:56:59 -- scripts/common.sh@345 -- # : 1 00:02:41.171 16:56:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:41.171 16:56:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:41.171 16:56:59 -- scripts/common.sh@365 -- # decimal 1 00:02:41.171 16:56:59 -- scripts/common.sh@353 -- # local d=1 00:02:41.171 16:56:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:41.171 16:56:59 -- scripts/common.sh@355 -- # echo 1 00:02:41.171 16:56:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:41.171 16:56:59 -- scripts/common.sh@366 -- # decimal 2 00:02:41.171 16:56:59 -- scripts/common.sh@353 -- # local d=2 00:02:41.171 16:56:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:41.171 16:56:59 -- scripts/common.sh@355 -- # echo 2 00:02:41.171 16:56:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:41.171 16:56:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:41.171 16:56:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:41.171 16:56:59 -- scripts/common.sh@368 -- # return 0 00:02:41.171 16:56:59 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:41.171 16:56:59 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:41.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.171 --rc genhtml_branch_coverage=1 00:02:41.171 --rc genhtml_function_coverage=1 00:02:41.171 --rc genhtml_legend=1 00:02:41.171 --rc geninfo_all_blocks=1 00:02:41.171 --rc geninfo_unexecuted_blocks=1 00:02:41.171 00:02:41.171 ' 00:02:41.171 16:56:59 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:41.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.171 --rc genhtml_branch_coverage=1 00:02:41.171 --rc genhtml_function_coverage=1 00:02:41.171 --rc genhtml_legend=1 00:02:41.171 --rc geninfo_all_blocks=1 00:02:41.171 --rc geninfo_unexecuted_blocks=1 00:02:41.171 00:02:41.171 ' 00:02:41.171 16:56:59 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:41.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.171 --rc genhtml_branch_coverage=1 00:02:41.171 --rc 
genhtml_function_coverage=1 00:02:41.171 --rc genhtml_legend=1 00:02:41.171 --rc geninfo_all_blocks=1 00:02:41.171 --rc geninfo_unexecuted_blocks=1 00:02:41.171 00:02:41.171 ' 00:02:41.171 16:56:59 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:41.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:41.171 --rc genhtml_branch_coverage=1 00:02:41.171 --rc genhtml_function_coverage=1 00:02:41.171 --rc genhtml_legend=1 00:02:41.171 --rc geninfo_all_blocks=1 00:02:41.171 --rc geninfo_unexecuted_blocks=1 00:02:41.171 00:02:41.171 ' 00:02:41.171 16:56:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:41.171 16:56:59 -- nvmf/common.sh@7 -- # uname -s 00:02:41.171 16:56:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:41.171 16:56:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:41.171 16:56:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:41.171 16:56:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:41.171 16:56:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:41.171 16:56:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:41.171 16:56:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:41.171 16:56:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:41.171 16:56:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:41.171 16:56:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:41.171 16:56:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:02:41.171 16:56:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:02:41.171 16:56:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:41.171 16:56:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:41.171 16:56:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:41.171 16:56:59 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:41.171 16:56:59 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:41.171 16:56:59 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:41.171 16:56:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:41.171 16:56:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:41.171 16:56:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:41.171 16:56:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.171 16:56:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.171 16:56:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.171 16:56:59 -- paths/export.sh@5 -- # export PATH 00:02:41.171 16:56:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.171 16:56:59 -- nvmf/common.sh@51 -- # : 0 00:02:41.171 16:56:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:41.171 16:56:59 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:41.171 16:56:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:41.171 16:56:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:41.171 16:56:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:41.171 16:56:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:41.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:41.171 16:56:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:41.171 16:56:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:41.171 16:56:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:41.171 16:56:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:41.171 16:56:59 -- spdk/autotest.sh@32 -- # uname -s 00:02:41.171 16:56:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:41.171 16:56:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:41.171 16:56:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.171 16:56:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:41.171 16:56:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.171 16:56:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:41.171 16:56:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:41.171 16:56:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:41.171 16:56:59 -- spdk/autotest.sh@48 -- # udevadm_pid=2280497 00:02:41.171 16:56:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:41.171 16:56:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:41.171 16:56:59 -- pm/common@17 -- # local monitor 00:02:41.171 16:56:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.171 16:56:59 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:41.171 16:56:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.171 16:56:59 -- pm/common@21 -- # date +%s 00:02:41.171 16:56:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.171 16:56:59 -- pm/common@21 -- # date +%s 00:02:41.171 16:56:59 -- pm/common@25 -- # sleep 1 00:02:41.171 16:56:59 -- pm/common@21 -- # date +%s 00:02:41.171 16:56:59 -- pm/common@21 -- # date +%s 00:02:41.171 16:56:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732118219 00:02:41.171 16:56:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732118219 00:02:41.171 16:56:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732118219 00:02:41.171 16:56:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732118219 00:02:41.430 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732118219_collect-vmstat.pm.log 00:02:41.430 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732118219_collect-cpu-load.pm.log 00:02:41.430 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732118219_collect-cpu-temp.pm.log 00:02:41.430 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732118219_collect-bmc-pm.bmc.pm.log 00:02:42.364 
16:57:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:42.364 16:57:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:42.364 16:57:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:42.364 16:57:00 -- common/autotest_common.sh@10 -- # set +x 00:02:42.364 16:57:00 -- spdk/autotest.sh@59 -- # create_test_list 00:02:42.364 16:57:00 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:42.364 16:57:00 -- common/autotest_common.sh@10 -- # set +x 00:02:42.364 16:57:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:42.364 16:57:00 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.364 16:57:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.364 16:57:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:42.364 16:57:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.364 16:57:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:42.364 16:57:00 -- common/autotest_common.sh@1457 -- # uname 00:02:42.364 16:57:00 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:42.364 16:57:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:42.364 16:57:00 -- common/autotest_common.sh@1477 -- # uname 00:02:42.364 16:57:00 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:42.364 16:57:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:42.364 16:57:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:42.364 lcov: LCOV version 1.15 00:02:42.364 16:57:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:57.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:57.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:09.450 16:57:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:09.450 16:57:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:09.451 16:57:25 -- common/autotest_common.sh@10 -- # set +x 00:03:09.451 16:57:25 -- spdk/autotest.sh@78 -- # rm -f 00:03:09.451 16:57:25 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.388 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:10.388 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:10.388 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:10.388 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.646 16:57:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:10.646 16:57:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:10.646 16:57:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:10.646 16:57:28 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:10.646 16:57:28 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:10.647 16:57:28 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:10.647 16:57:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:10.647 16:57:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.647 16:57:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:10.647 16:57:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:10.647 16:57:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:10.647 16:57:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:10.647 16:57:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:10.647 16:57:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:10.647 16:57:28 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:10.647 No valid GPT data, bailing 00:03:10.647 16:57:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.647 16:57:28 -- scripts/common.sh@394 -- # pt= 00:03:10.647 16:57:28 -- scripts/common.sh@395 -- # return 1 00:03:10.647 16:57:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:10.647 1+0 records in 00:03:10.647 1+0 records out 00:03:10.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493599 s, 212 MB/s 00:03:10.647 16:57:28 -- spdk/autotest.sh@105 -- # sync 00:03:10.647 16:57:28 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:10.647 16:57:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:10.647 16:57:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:17.221 16:57:34 -- spdk/autotest.sh@111 -- # uname -s 00:03:17.221 16:57:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:17.221 16:57:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:17.221 16:57:34 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.125 Hugepages 00:03:19.125 node hugesize free / total 00:03:19.125 node0 1048576kB 0 / 0 00:03:19.125 node0 2048kB 0 / 0 00:03:19.125 node1 1048576kB 0 / 0 00:03:19.125 node1 2048kB 0 / 0 00:03:19.125 00:03:19.125 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.125 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:19.125 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:19.125 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:19.125 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:19.125 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:19.125 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:19.125 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:19.125 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:19.125 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:19.125 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:19.125 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:19.125 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:19.125 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:19.125 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:19.125 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:19.125 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:19.125 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:19.125 16:57:37 -- spdk/autotest.sh@117 -- # uname -s 00:03:19.125 16:57:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:19.125 16:57:37 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:19.125 16:57:37 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.412 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.412 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:23.789 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.789 16:57:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:24.725 16:57:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:24.725 16:57:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:24.725 16:57:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:24.725 16:57:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:24.725 16:57:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:24.725 16:57:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:24.725 16:57:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:24.726 16:57:42 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:24.726 16:57:42 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:24.726 16:57:42 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:24.726 16:57:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:24.726 16:57:42 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.014 Waiting for block devices as requested 00:03:28.014 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:28.014 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:28.014 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:28.014 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:28.015 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:28.015 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:28.015 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:28.015 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:28.273 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:28.273 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:28.273 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:28.532 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:28.533 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:28.533 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:28.791 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:28.791 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:28.791 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:29.050 16:57:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:29.050 16:57:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:29.050 16:57:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:29.050 16:57:46 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:29.050 16:57:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:29.050 16:57:46 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:29.050 16:57:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:29.050 16:57:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:29.050 16:57:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:29.050 16:57:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:29.051 16:57:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:29.051 16:57:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:29.051 16:57:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:29.051 16:57:46 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:29.051 16:57:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:29.051 16:57:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:29.051 16:57:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:29.051 16:57:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:29.051 16:57:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:29.051 16:57:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:29.051 16:57:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:29.051 16:57:46 -- common/autotest_common.sh@1543 -- # continue 00:03:29.051 16:57:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:29.051 16:57:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:29.051 16:57:46 -- common/autotest_common.sh@10 -- # set +x 00:03:29.051 16:57:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:29.051 16:57:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.051 16:57:46 -- common/autotest_common.sh@10 -- # set +x 00:03:29.051 16:57:46 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.341 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:32.341 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.341 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.276 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.534 16:57:51 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:33.534 16:57:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:33.534 16:57:51 -- common/autotest_common.sh@10 -- # set +x 00:03:33.534 16:57:51 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:33.534 16:57:51 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:33.534 16:57:51 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:33.534 16:57:51 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:33.534 16:57:51 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:33.534 16:57:51 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:33.534 16:57:51 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:33.534 16:57:51 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:33.534 16:57:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:33.534 16:57:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:33.534 16:57:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:33.534 16:57:51 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:33.534 16:57:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:33.534 16:57:51 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:33.534 16:57:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:33.534 16:57:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:33.534 16:57:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:33.534 16:57:51 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:33.534 16:57:51 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:33.534 16:57:51 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:33.534 16:57:51 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:33.534 16:57:51 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:33.534 16:57:51 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:33.534 16:57:51 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2295453 00:03:33.534 16:57:51 -- common/autotest_common.sh@1585 -- # waitforlisten 2295453 00:03:33.534 16:57:51 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:33.534 16:57:51 -- common/autotest_common.sh@835 -- # '[' -z 2295453 ']' 00:03:33.534 16:57:51 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.534 16:57:51 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:33.534 16:57:51 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:33.534 16:57:51 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:33.534 16:57:51 -- common/autotest_common.sh@10 -- # set +x 00:03:33.792 [2024-11-20 16:57:51.599750] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:03:33.792 [2024-11-20 16:57:51.599801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2295453 ] 00:03:33.792 [2024-11-20 16:57:51.674657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.792 [2024-11-20 16:57:51.719874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.050 16:57:51 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:34.050 16:57:51 -- common/autotest_common.sh@868 -- # return 0 00:03:34.050 16:57:51 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:34.050 16:57:51 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:34.050 16:57:51 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:37.336 nvme0n1 00:03:37.336 16:57:54 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:37.336 [2024-11-20 16:57:55.129181] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:37.336 request: 00:03:37.336 { 00:03:37.336 "nvme_ctrlr_name": "nvme0", 00:03:37.336 "password": "test", 00:03:37.336 "method": "bdev_nvme_opal_revert", 00:03:37.336 "req_id": 1 00:03:37.336 } 00:03:37.336 Got JSON-RPC error response 00:03:37.336 response: 00:03:37.336 { 00:03:37.336 "code": -32602, 00:03:37.336 "message": "Invalid parameters" 00:03:37.336 } 00:03:37.336 16:57:55 -- common/autotest_common.sh@1591 -- # true 
00:03:37.336 16:57:55 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:37.336 16:57:55 -- common/autotest_common.sh@1595 -- # killprocess 2295453 00:03:37.336 16:57:55 -- common/autotest_common.sh@954 -- # '[' -z 2295453 ']' 00:03:37.336 16:57:55 -- common/autotest_common.sh@958 -- # kill -0 2295453 00:03:37.336 16:57:55 -- common/autotest_common.sh@959 -- # uname 00:03:37.337 16:57:55 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:37.337 16:57:55 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2295453 00:03:37.337 16:57:55 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:37.337 16:57:55 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:37.337 16:57:55 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2295453' 00:03:37.337 killing process with pid 2295453 00:03:37.337 16:57:55 -- common/autotest_common.sh@973 -- # kill 2295453 00:03:37.337 16:57:55 -- common/autotest_common.sh@978 -- # wait 2295453 00:03:39.867 16:57:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:39.867 16:57:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:39.867 16:57:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:39.867 16:57:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:39.867 16:57:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:39.867 16:57:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:39.867 16:57:57 -- common/autotest_common.sh@10 -- # set +x 00:03:39.867 16:57:57 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:39.867 16:57:57 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:39.867 16:57:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.867 16:57:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.867 16:57:57 -- common/autotest_common.sh@10 -- # set +x 00:03:39.867 ************************************ 00:03:39.867 START TEST env 00:03:39.867 
************************************ 00:03:39.867 16:57:57 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:39.867 * Looking for test storage... 00:03:39.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:39.867 16:57:57 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:39.867 16:57:57 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:39.867 16:57:57 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:39.867 16:57:57 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:39.867 16:57:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.867 16:57:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.867 16:57:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.867 16:57:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.867 16:57:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.867 16:57:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.867 16:57:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.867 16:57:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.867 16:57:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.867 16:57:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.867 16:57:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.867 16:57:57 env -- scripts/common.sh@344 -- # case "$op" in 00:03:39.867 16:57:57 env -- scripts/common.sh@345 -- # : 1 00:03:39.867 16:57:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.867 16:57:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.867 16:57:57 env -- scripts/common.sh@365 -- # decimal 1 00:03:39.867 16:57:57 env -- scripts/common.sh@353 -- # local d=1 00:03:39.867 16:57:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.867 16:57:57 env -- scripts/common.sh@355 -- # echo 1 00:03:39.867 16:57:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.867 16:57:57 env -- scripts/common.sh@366 -- # decimal 2 00:03:39.867 16:57:57 env -- scripts/common.sh@353 -- # local d=2 00:03:39.867 16:57:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.867 16:57:57 env -- scripts/common.sh@355 -- # echo 2 00:03:39.867 16:57:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.867 16:57:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.867 16:57:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.867 16:57:57 env -- scripts/common.sh@368 -- # return 0 00:03:39.867 16:57:57 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.868 16:57:57 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:39.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.868 --rc genhtml_branch_coverage=1 00:03:39.868 --rc genhtml_function_coverage=1 00:03:39.868 --rc genhtml_legend=1 00:03:39.868 --rc geninfo_all_blocks=1 00:03:39.868 --rc geninfo_unexecuted_blocks=1 00:03:39.868 00:03:39.868 ' 00:03:39.868 16:57:57 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:39.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.868 --rc genhtml_branch_coverage=1 00:03:39.868 --rc genhtml_function_coverage=1 00:03:39.868 --rc genhtml_legend=1 00:03:39.868 --rc geninfo_all_blocks=1 00:03:39.868 --rc geninfo_unexecuted_blocks=1 00:03:39.868 00:03:39.868 ' 00:03:39.868 16:57:57 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:39.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:39.868 --rc genhtml_branch_coverage=1 00:03:39.868 --rc genhtml_function_coverage=1 00:03:39.868 --rc genhtml_legend=1 00:03:39.868 --rc geninfo_all_blocks=1 00:03:39.868 --rc geninfo_unexecuted_blocks=1 00:03:39.868 00:03:39.868 ' 00:03:39.868 16:57:57 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:39.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.868 --rc genhtml_branch_coverage=1 00:03:39.868 --rc genhtml_function_coverage=1 00:03:39.868 --rc genhtml_legend=1 00:03:39.868 --rc geninfo_all_blocks=1 00:03:39.868 --rc geninfo_unexecuted_blocks=1 00:03:39.868 00:03:39.868 ' 00:03:39.868 16:57:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:39.868 16:57:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:39.868 16:57:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.868 16:57:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.868 ************************************ 00:03:39.868 START TEST env_memory 00:03:39.868 ************************************ 00:03:39.868 16:57:57 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:39.868 00:03:39.868 00:03:39.868 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.868 http://cunit.sourceforge.net/ 00:03:39.868 00:03:39.868 00:03:39.868 Suite: memory 00:03:39.868 Test: alloc and free memory map ...[2024-11-20 16:57:57.655023] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:39.868 passed 00:03:39.868 Test: mem map translation ...[2024-11-20 16:57:57.673743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:39.868 [2024-11-20 
16:57:57.673757] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:39.868 [2024-11-20 16:57:57.673790] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:39.868 [2024-11-20 16:57:57.673811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:39.868 passed 00:03:39.868 Test: mem map registration ...[2024-11-20 16:57:57.709582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:39.868 [2024-11-20 16:57:57.709595] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:39.868 passed 00:03:39.868 Test: mem map adjacent registrations ...passed 00:03:39.868 00:03:39.868 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.868 suites 1 1 n/a 0 0 00:03:39.868 tests 4 4 4 0 0 00:03:39.868 asserts 152 152 152 0 n/a 00:03:39.868 00:03:39.868 Elapsed time = 0.136 seconds 00:03:39.868 00:03:39.868 real 0m0.148s 00:03:39.868 user 0m0.139s 00:03:39.868 sys 0m0.009s 00:03:39.868 16:57:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:39.868 16:57:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:39.868 ************************************ 00:03:39.868 END TEST env_memory 00:03:39.868 ************************************ 00:03:39.868 16:57:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.868 16:57:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:39.868 16:57:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:39.868 16:57:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.868 ************************************ 00:03:39.868 START TEST env_vtophys 00:03:39.868 ************************************ 00:03:39.868 16:57:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.868 EAL: lib.eal log level changed from notice to debug 00:03:39.868 EAL: Detected lcore 0 as core 0 on socket 0 00:03:39.868 EAL: Detected lcore 1 as core 1 on socket 0 00:03:39.868 EAL: Detected lcore 2 as core 2 on socket 0 00:03:39.868 EAL: Detected lcore 3 as core 3 on socket 0 00:03:39.868 EAL: Detected lcore 4 as core 4 on socket 0 00:03:39.868 EAL: Detected lcore 5 as core 5 on socket 0 00:03:39.868 EAL: Detected lcore 6 as core 6 on socket 0 00:03:39.868 EAL: Detected lcore 7 as core 8 on socket 0 00:03:39.868 EAL: Detected lcore 8 as core 9 on socket 0 00:03:39.868 EAL: Detected lcore 9 as core 10 on socket 0 00:03:39.868 EAL: Detected lcore 10 as core 11 on socket 0 00:03:39.868 EAL: Detected lcore 11 as core 12 on socket 0 00:03:39.868 EAL: Detected lcore 12 as core 13 on socket 0 00:03:39.868 EAL: Detected lcore 13 as core 16 on socket 0 00:03:39.868 EAL: Detected lcore 14 as core 17 on socket 0 00:03:39.868 EAL: Detected lcore 15 as core 18 on socket 0 00:03:39.868 EAL: Detected lcore 16 as core 19 on socket 0 00:03:39.868 EAL: Detected lcore 17 as core 20 on socket 0 00:03:39.868 EAL: Detected lcore 18 as core 21 on socket 0 00:03:39.868 EAL: Detected lcore 19 as core 25 on socket 0 00:03:39.868 EAL: Detected lcore 20 as core 26 on socket 0 00:03:39.868 EAL: Detected lcore 21 as core 27 on socket 0 00:03:39.868 EAL: Detected lcore 22 as core 28 on socket 0 00:03:39.868 EAL: Detected lcore 23 as core 29 on socket 0 00:03:39.868 EAL: Detected lcore 24 as core 0 on socket 1 00:03:39.868 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:39.868 EAL: Detected lcore 26 as core 2 on socket 1 00:03:39.868 EAL: Detected lcore 27 as core 3 on socket 1 00:03:39.868 EAL: Detected lcore 28 as core 4 on socket 1 00:03:39.868 EAL: Detected lcore 29 as core 5 on socket 1 00:03:39.868 EAL: Detected lcore 30 as core 6 on socket 1 00:03:39.868 EAL: Detected lcore 31 as core 8 on socket 1 00:03:39.868 EAL: Detected lcore 32 as core 10 on socket 1 00:03:39.868 EAL: Detected lcore 33 as core 11 on socket 1 00:03:39.868 EAL: Detected lcore 34 as core 12 on socket 1 00:03:39.868 EAL: Detected lcore 35 as core 13 on socket 1 00:03:39.868 EAL: Detected lcore 36 as core 16 on socket 1 00:03:39.868 EAL: Detected lcore 37 as core 17 on socket 1 00:03:39.868 EAL: Detected lcore 38 as core 18 on socket 1 00:03:39.868 EAL: Detected lcore 39 as core 19 on socket 1 00:03:39.868 EAL: Detected lcore 40 as core 20 on socket 1 00:03:39.868 EAL: Detected lcore 41 as core 21 on socket 1 00:03:39.868 EAL: Detected lcore 42 as core 24 on socket 1 00:03:39.868 EAL: Detected lcore 43 as core 25 on socket 1 00:03:39.868 EAL: Detected lcore 44 as core 26 on socket 1 00:03:39.868 EAL: Detected lcore 45 as core 27 on socket 1 00:03:39.868 EAL: Detected lcore 46 as core 28 on socket 1 00:03:39.868 EAL: Detected lcore 47 as core 29 on socket 1 00:03:39.868 EAL: Detected lcore 48 as core 0 on socket 0 00:03:39.868 EAL: Detected lcore 49 as core 1 on socket 0 00:03:39.868 EAL: Detected lcore 50 as core 2 on socket 0 00:03:39.868 EAL: Detected lcore 51 as core 3 on socket 0 00:03:39.868 EAL: Detected lcore 52 as core 4 on socket 0 00:03:39.868 EAL: Detected lcore 53 as core 5 on socket 0 00:03:39.868 EAL: Detected lcore 54 as core 6 on socket 0 00:03:39.868 EAL: Detected lcore 55 as core 8 on socket 0 00:03:39.868 EAL: Detected lcore 56 as core 9 on socket 0 00:03:39.868 EAL: Detected lcore 57 as core 10 on socket 0 00:03:39.868 EAL: Detected lcore 58 as core 11 on socket 0 00:03:39.868 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:39.868 EAL: Detected lcore 60 as core 13 on socket 0 00:03:39.868 EAL: Detected lcore 61 as core 16 on socket 0 00:03:39.868 EAL: Detected lcore 62 as core 17 on socket 0 00:03:39.868 EAL: Detected lcore 63 as core 18 on socket 0 00:03:39.868 EAL: Detected lcore 64 as core 19 on socket 0 00:03:39.868 EAL: Detected lcore 65 as core 20 on socket 0 00:03:39.868 EAL: Detected lcore 66 as core 21 on socket 0 00:03:39.868 EAL: Detected lcore 67 as core 25 on socket 0 00:03:39.868 EAL: Detected lcore 68 as core 26 on socket 0 00:03:39.868 EAL: Detected lcore 69 as core 27 on socket 0 00:03:39.868 EAL: Detected lcore 70 as core 28 on socket 0 00:03:39.868 EAL: Detected lcore 71 as core 29 on socket 0 00:03:39.868 EAL: Detected lcore 72 as core 0 on socket 1 00:03:39.868 EAL: Detected lcore 73 as core 1 on socket 1 00:03:39.868 EAL: Detected lcore 74 as core 2 on socket 1 00:03:39.868 EAL: Detected lcore 75 as core 3 on socket 1 00:03:39.868 EAL: Detected lcore 76 as core 4 on socket 1 00:03:39.868 EAL: Detected lcore 77 as core 5 on socket 1 00:03:39.868 EAL: Detected lcore 78 as core 6 on socket 1 00:03:39.868 EAL: Detected lcore 79 as core 8 on socket 1 00:03:39.868 EAL: Detected lcore 80 as core 10 on socket 1 00:03:39.868 EAL: Detected lcore 81 as core 11 on socket 1 00:03:39.868 EAL: Detected lcore 82 as core 12 on socket 1 00:03:39.868 EAL: Detected lcore 83 as core 13 on socket 1 00:03:39.868 EAL: Detected lcore 84 as core 16 on socket 1 00:03:39.868 EAL: Detected lcore 85 as core 17 on socket 1 00:03:39.868 EAL: Detected lcore 86 as core 18 on socket 1 00:03:39.868 EAL: Detected lcore 87 as core 19 on socket 1 00:03:39.868 EAL: Detected lcore 88 as core 20 on socket 1 00:03:39.868 EAL: Detected lcore 89 as core 21 on socket 1 00:03:39.868 EAL: Detected lcore 90 as core 24 on socket 1 00:03:39.868 EAL: Detected lcore 91 as core 25 on socket 1 00:03:39.868 EAL: Detected lcore 92 as core 26 on socket 1 00:03:39.868 EAL: Detected lcore 93 as core 
27 on socket 1
00:03:39.868 EAL: Detected lcore 94 as core 28 on socket 1
00:03:39.868 EAL: Detected lcore 95 as core 29 on socket 1
00:03:39.868 EAL: Maximum logical cores by configuration: 128
00:03:39.868 EAL: Detected CPU lcores: 96
00:03:39.868 EAL: Detected NUMA nodes: 2
00:03:39.868 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:39.868 EAL: Detected shared linkage of DPDK
00:03:39.868 EAL: No shared files mode enabled, IPC will be disabled
00:03:39.868 EAL: Bus pci wants IOVA as 'DC'
00:03:39.868 EAL: Buses did not request a specific IOVA mode.
00:03:39.868 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:39.868 EAL: Selected IOVA mode 'VA'
00:03:39.868 EAL: Probing VFIO support...
00:03:39.868 EAL: IOMMU type 1 (Type 1) is supported
00:03:39.868 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:39.868 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:39.868 EAL: VFIO support initialized
00:03:39.868 EAL: Ask a virtual area of 0x2e000 bytes
00:03:39.868 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:39.868 EAL: Setting up physically contiguous memory...
00:03:39.868 EAL: Setting maximum number of open files to 524288
00:03:39.868 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:39.868 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:39.868 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:39.868 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.868 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:39.868 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:39.868 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.868 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:39.868 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:39.868 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.868 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:39.868 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:39.868 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.868 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:39.868 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:39.868 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.868 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:39.868 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:39.868 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.868 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:39.868 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:39.868 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.868 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:39.868 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:39.868 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.868 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:39.868 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:39.868 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:39.868 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.868 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:39.868 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:39.868 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.868 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:39.868 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:39.868 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.868 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:39.868 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:39.868 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.868 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:39.868 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:39.868 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.868 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:39.868 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:39.868 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.868 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:39.868 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:39.868 EAL: Ask a virtual area of 0x61000 bytes
00:03:39.868 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:39.868 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:39.868 EAL: Ask a virtual area of 0x400000000 bytes
00:03:39.868 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:39.868 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:39.868 EAL: Hugepages will be freed exactly as allocated.
00:03:39.868 EAL: No shared files mode enabled, IPC is disabled
00:03:39.869 EAL: No shared files mode enabled, IPC is disabled
00:03:39.869 EAL: TSC frequency is ~2100000 KHz
00:03:39.869 EAL: Main lcore 0 is ready (tid=7fb125f68a00;cpuset=[0])
00:03:39.869 EAL: Trying to obtain current memory policy.
00:03:39.869 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:39.869 EAL: Restoring previous memory policy: 0
00:03:39.869 EAL: request: mp_malloc_sync
00:03:39.869 EAL: No shared files mode enabled, IPC is disabled
00:03:39.869 EAL: Heap on socket 0 was expanded by 2MB
00:03:39.869 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:40.127 EAL: Mem event callback 'spdk:(nil)' registered
00:03:40.127
00:03:40.127
00:03:40.127 CUnit - A unit testing framework for C - Version 2.1-3
00:03:40.127 http://cunit.sourceforge.net/
00:03:40.127
00:03:40.127
00:03:40.127 Suite: components_suite
00:03:40.127 Test: vtophys_malloc_test ...passed
00:03:40.127 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.127 EAL: Restoring previous memory policy: 4
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was expanded by 4MB
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was shrunk by 4MB
00:03:40.127 EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.127 EAL: Restoring previous memory policy: 4
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was expanded by 6MB
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was shrunk by 6MB
00:03:40.127 EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.127 EAL: Restoring previous memory policy: 4
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was expanded by 10MB
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was shrunk by 10MB
00:03:40.127 EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.127 EAL: Restoring previous memory policy: 4
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was expanded by 18MB
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was shrunk by 18MB
00:03:40.127 EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.127 EAL: Restoring previous memory policy: 4
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was expanded by 34MB
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was shrunk by 34MB
00:03:40.127 EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.127 EAL: Restoring previous memory policy: 4
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was expanded by 66MB
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was shrunk by 66MB
00:03:40.127 EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.127 EAL: Restoring previous memory policy: 4
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was expanded by 130MB
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was shrunk by 130MB
00:03:40.127 EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.127 EAL: Restoring previous memory policy: 4
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was expanded by 258MB
00:03:40.127 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.127 EAL: request: mp_malloc_sync
00:03:40.127 EAL: No shared files mode enabled, IPC is disabled
00:03:40.127 EAL: Heap on socket 0 was shrunk by 258MB
00:03:40.127 EAL: Trying to obtain current memory policy.
00:03:40.127 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.386 EAL: Restoring previous memory policy: 4
00:03:40.386 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.386 EAL: request: mp_malloc_sync
00:03:40.386 EAL: No shared files mode enabled, IPC is disabled
00:03:40.386 EAL: Heap on socket 0 was expanded by 514MB
00:03:40.386 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.645 EAL: request: mp_malloc_sync
00:03:40.645 EAL: No shared files mode enabled, IPC is disabled
00:03:40.645 EAL: Heap on socket 0 was shrunk by 514MB
00:03:40.645 EAL: Trying to obtain current memory policy.
00:03:40.645 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:40.645 EAL: Restoring previous memory policy: 4
00:03:40.645 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.645 EAL: request: mp_malloc_sync
00:03:40.645 EAL: No shared files mode enabled, IPC is disabled
00:03:40.645 EAL: Heap on socket 0 was expanded by 1026MB
00:03:40.903 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.903 EAL: request: mp_malloc_sync
00:03:40.903 EAL: No shared files mode enabled, IPC is disabled
00:03:40.903 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:40.903 passed
00:03:40.903
00:03:40.903 Run Summary: Type Total Ran Passed Failed Inactive
00:03:40.903 suites 1 1 n/a 0 0
00:03:40.903 tests 2 2 2 0 0
00:03:40.903 asserts 497 497 497 0 n/a
00:03:40.903
00:03:40.903 Elapsed time = 0.975 seconds
00:03:40.903 EAL: Calling mem event callback 'spdk:(nil)'
00:03:40.903 EAL: request: mp_malloc_sync
00:03:40.903 EAL: No shared files mode enabled, IPC is disabled
00:03:40.903 EAL: Heap on socket 0 was shrunk by 2MB
00:03:40.903 EAL: No shared files mode enabled, IPC is disabled
00:03:40.903 EAL: No shared files mode enabled, IPC is disabled
00:03:40.903 EAL: No shared files mode enabled, IPC is disabled
00:03:40.903
00:03:40.903 real 0m1.109s
00:03:40.903 user 0m0.643s
00:03:40.903 sys 0m0.434s
16:57:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:40.903 16:57:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:40.903 ************************************
00:03:40.903 END TEST env_vtophys
00:03:40.903 ************************************
00:03:41.162 16:57:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:41.162 16:57:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:41.162 16:57:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:41.162 16:57:58 env -- common/autotest_common.sh@10 -- # set +x
00:03:41.162 ************************************
00:03:41.162 START TEST env_pci
00:03:41.162 ************************************
00:03:41.162 16:57:59 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:41.162
00:03:41.162
00:03:41.162 CUnit - A unit testing framework for C - Version 2.1-3
00:03:41.162 http://cunit.sourceforge.net/
00:03:41.162
00:03:41.162
00:03:41.162 Suite: pci
00:03:41.162 Test: pci_hook ...[2024-11-20 16:57:59.028097] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2296785 has claimed it
00:03:41.162 EAL: Cannot find device (10000:00:01.0)
00:03:41.162 EAL: Failed to attach device on primary process
00:03:41.162 passed
00:03:41.162
00:03:41.162 Run Summary: Type Total Ran Passed Failed Inactive
00:03:41.162 suites 1 1 n/a 0 0
00:03:41.162 tests 1 1 1 0 0
00:03:41.162 asserts 25 25 25 0 n/a
00:03:41.162
00:03:41.162 Elapsed time = 0.026 seconds
00:03:41.162
00:03:41.162 real 0m0.046s
00:03:41.162 user 0m0.012s
00:03:41.162 sys 0m0.034s
16:57:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:41.162 16:57:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:41.162 ************************************
00:03:41.162 END TEST env_pci
00:03:41.162 ************************************
00:03:41.162 16:57:59 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:41.162 16:57:59 env -- env/env.sh@15 -- # uname
00:03:41.162 16:57:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:41.162 16:57:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:41.162 16:57:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:41.162 16:57:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:41.162 16:57:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:41.162 16:57:59 env -- common/autotest_common.sh@10 -- # set +x
00:03:41.162 ************************************
00:03:41.162 START TEST env_dpdk_post_init
00:03:41.162 ************************************
00:03:41.162 16:57:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:41.162 EAL: Detected CPU lcores: 96
00:03:41.162 EAL: Detected NUMA nodes: 2
00:03:41.162 EAL: Detected shared linkage of DPDK
00:03:41.162 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:41.162 EAL: Selected IOVA mode 'VA'
00:03:41.162 EAL: VFIO support initialized
00:03:41.162 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:41.420 EAL: Using IOMMU type 1 (Type 1)
00:03:41.420 EAL: Ignore mapping IO port bar(1)
00:03:41.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:41.420 EAL: Ignore mapping IO port bar(1)
00:03:41.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:41.420 EAL: Ignore mapping IO port bar(1)
00:03:41.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:41.420 EAL: Ignore mapping IO port bar(1)
00:03:41.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:41.420 EAL: Ignore mapping IO port bar(1)
00:03:41.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:41.420 EAL: Ignore mapping IO port bar(1)
00:03:41.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:41.420 EAL: Ignore mapping IO port bar(1)
00:03:41.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:41.420 EAL: Ignore mapping IO port bar(1)
00:03:41.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:42.356 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:42.356 EAL: Ignore mapping IO port bar(1)
00:03:42.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:42.356 EAL: Ignore mapping IO port bar(1)
00:03:42.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:42.356 EAL: Ignore mapping IO port bar(1)
00:03:42.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:42.356 EAL: Ignore mapping IO port bar(1)
00:03:42.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:42.356 EAL: Ignore mapping IO port bar(1)
00:03:42.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:42.356 EAL: Ignore mapping IO port bar(1)
00:03:42.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:42.356 EAL: Ignore mapping IO port bar(1)
00:03:42.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:42.356 EAL: Ignore mapping IO port bar(1)
00:03:42.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:03:45.638 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:03:45.638 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:03:46.252 Starting DPDK initialization...
00:03:46.252 Starting SPDK post initialization...
00:03:46.252 SPDK NVMe probe
00:03:46.252 Attaching to 0000:5e:00.0
00:03:46.252 Attached to 0000:5e:00.0
00:03:46.252 Cleaning up...
00:03:46.252
00:03:46.252 real 0m4.847s
00:03:46.252 user 0m3.403s
00:03:46.252 sys 0m0.510s
16:58:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:46.252 16:58:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:46.252 ************************************
00:03:46.252 END TEST env_dpdk_post_init
00:03:46.252 ************************************
00:03:46.252 16:58:04 env -- env/env.sh@26 -- # uname
00:03:46.252 16:58:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:46.252 16:58:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:46.252 16:58:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:46.252 16:58:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:46.252 16:58:04 env -- common/autotest_common.sh@10 -- # set +x
00:03:46.252 ************************************
00:03:46.252 START TEST env_mem_callbacks
00:03:46.252 ************************************
00:03:46.252 16:58:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:46.252 EAL: Detected CPU lcores: 96
00:03:46.252 EAL: Detected NUMA nodes: 2
00:03:46.252 EAL: Detected shared linkage of DPDK
00:03:46.252 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:46.252 EAL: Selected IOVA mode 'VA'
00:03:46.252 EAL: VFIO support initialized
00:03:46.252 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:46.252
00:03:46.252
00:03:46.252 CUnit - A unit testing framework for C - Version 2.1-3
00:03:46.252 http://cunit.sourceforge.net/
00:03:46.252
00:03:46.252
00:03:46.252 Suite: memory
00:03:46.252 Test: test ...
00:03:46.252 register 0x200000200000 2097152
00:03:46.252 malloc 3145728
00:03:46.252 register 0x200000400000 4194304
00:03:46.252 buf 0x200000500000 len 3145728 PASSED
00:03:46.252 malloc 64
00:03:46.252 buf 0x2000004fff40 len 64 PASSED
00:03:46.252 malloc 4194304
00:03:46.252 register 0x200000800000 6291456
00:03:46.252 buf 0x200000a00000 len 4194304 PASSED
00:03:46.252 free 0x200000500000 3145728
00:03:46.252 free 0x2000004fff40 64
00:03:46.252 unregister 0x200000400000 4194304 PASSED
00:03:46.252 free 0x200000a00000 4194304
00:03:46.252 unregister 0x200000800000 6291456 PASSED
00:03:46.252 malloc 8388608
00:03:46.252 register 0x200000400000 10485760
00:03:46.252 buf 0x200000600000 len 8388608 PASSED
00:03:46.252 free 0x200000600000 8388608
00:03:46.252 unregister 0x200000400000 10485760 PASSED
00:03:46.252 passed
00:03:46.252
00:03:46.252 Run Summary: Type Total Ran Passed Failed Inactive
00:03:46.252 suites 1 1 n/a 0 0
00:03:46.252 tests 1 1 1 0 0
00:03:46.252 asserts 15 15 15 0 n/a
00:03:46.252
00:03:46.252 Elapsed time = 0.008 seconds
00:03:46.252
00:03:46.252 real 0m0.057s
00:03:46.252 user 0m0.017s
00:03:46.252 sys 0m0.040s
16:58:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:46.252 16:58:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:46.252 ************************************
00:03:46.252 END TEST env_mem_callbacks
00:03:46.252 ************************************
00:03:46.252
00:03:46.252 real 0m6.739s
00:03:46.252 user 0m4.450s
00:03:46.252 sys 0m1.360s
16:58:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:46.252 16:58:04 env -- common/autotest_common.sh@10 -- # set +x
00:03:46.252 ************************************
00:03:46.252 END TEST env
00:03:46.252 ************************************
00:03:46.252 16:58:04 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
16:58:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
16:58:04 -- common/autotest_common.sh@1111 -- # xtrace_disable
16:58:04 -- common/autotest_common.sh@10 -- # set +x
00:03:46.252 ************************************
00:03:46.252 START TEST rpc
00:03:46.252 ************************************
00:03:46.252 16:58:04 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:46.560 * Looking for test storage...
00:03:46.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:46.560 16:58:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:46.560 16:58:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:46.560 16:58:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:46.560 16:58:04 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:46.560 16:58:04 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:46.560 16:58:04 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:46.560 16:58:04 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:46.560 16:58:04 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:46.560 16:58:04 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:46.560 16:58:04 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:46.560 16:58:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:46.560 16:58:04 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:46.560 16:58:04 rpc -- scripts/common.sh@345 -- # : 1
00:03:46.560 16:58:04 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:46.560 16:58:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:46.560 16:58:04 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:46.560 16:58:04 rpc -- scripts/common.sh@353 -- # local d=1
00:03:46.560 16:58:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:46.560 16:58:04 rpc -- scripts/common.sh@355 -- # echo 1
00:03:46.560 16:58:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:46.560 16:58:04 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:46.560 16:58:04 rpc -- scripts/common.sh@353 -- # local d=2
00:03:46.560 16:58:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:46.560 16:58:04 rpc -- scripts/common.sh@355 -- # echo 2
00:03:46.560 16:58:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:46.560 16:58:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:46.560 16:58:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:46.560 16:58:04 rpc -- scripts/common.sh@368 -- # return 0
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:46.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:46.560 --rc genhtml_branch_coverage=1
00:03:46.560 --rc genhtml_function_coverage=1
00:03:46.560 --rc genhtml_legend=1
00:03:46.560 --rc geninfo_all_blocks=1
00:03:46.560 --rc geninfo_unexecuted_blocks=1
00:03:46.560
00:03:46.560 '
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:46.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:46.560 --rc genhtml_branch_coverage=1
00:03:46.560 --rc genhtml_function_coverage=1
00:03:46.560 --rc genhtml_legend=1
00:03:46.560 --rc geninfo_all_blocks=1
00:03:46.560 --rc geninfo_unexecuted_blocks=1
00:03:46.560
00:03:46.560 '
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:46.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:46.560 --rc genhtml_branch_coverage=1
00:03:46.560 --rc genhtml_function_coverage=1
00:03:46.560 --rc genhtml_legend=1
00:03:46.560 --rc geninfo_all_blocks=1
00:03:46.560 --rc geninfo_unexecuted_blocks=1
00:03:46.560
00:03:46.560 '
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:46.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:46.560 --rc genhtml_branch_coverage=1
00:03:46.560 --rc genhtml_function_coverage=1
00:03:46.560 --rc genhtml_legend=1
00:03:46.560 --rc geninfo_all_blocks=1
00:03:46.560 --rc geninfo_unexecuted_blocks=1
00:03:46.560
00:03:46.560 '
00:03:46.560 16:58:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2297835
00:03:46.560 16:58:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:46.560 16:58:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:46.560 16:58:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2297835
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 2297835 ']'
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:46.560 16:58:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:58:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
16:58:04 rpc -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 16:58:04.445746] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:03:46.561 [2024-11-20 16:58:04.445795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297835 ]
00:03:46.561 [2024-11-20 16:58:04.520739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:46.561 [2024-11-20 16:58:04.559241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:46.561 [2024-11-20 16:58:04.559279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2297835' to capture a snapshot of events at runtime.
00:03:46.561 [2024-11-20 16:58:04.559287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:46.561 [2024-11-20 16:58:04.559293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:46.561 [2024-11-20 16:58:04.559297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2297835 for offline analysis/debug.
00:03:46.561 [2024-11-20 16:58:04.559848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
16:58:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
16:58:05 rpc -- common/autotest_common.sh@868 -- # return 0
16:58:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
16:58:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
16:58:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
16:58:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
16:58:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
16:58:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
16:58:05 rpc -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST rpc_integrity
************************************
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:47.567 {
00:03:47.567 "name": "Malloc0",
00:03:47.567 "aliases": [
00:03:47.567 "ca322bb9-d368-4e4c-8f1a-3b3191467457"
00:03:47.567 ],
00:03:47.567 "product_name": "Malloc disk",
00:03:47.567 "block_size": 512,
00:03:47.567 "num_blocks": 16384,
00:03:47.567 "uuid": "ca322bb9-d368-4e4c-8f1a-3b3191467457",
00:03:47.567 "assigned_rate_limits": {
00:03:47.567 "rw_ios_per_sec": 0,
00:03:47.567 "rw_mbytes_per_sec": 0,
00:03:47.567 "r_mbytes_per_sec": 0,
00:03:47.567 "w_mbytes_per_sec": 0
00:03:47.567 },
00:03:47.567 "claimed": false,
00:03:47.567 "zoned": false,
00:03:47.567 "supported_io_types": {
00:03:47.567 "read": true,
00:03:47.567 "write": true,
00:03:47.567 "unmap": true,
00:03:47.567 "flush": true,
00:03:47.567 "reset": true,
00:03:47.567 "nvme_admin": false,
00:03:47.567 "nvme_io": false,
00:03:47.567 "nvme_io_md": false,
00:03:47.567 "write_zeroes": true,
00:03:47.567 "zcopy": true,
00:03:47.567 "get_zone_info": false,
00:03:47.567 "zone_management": false,
00:03:47.567 "zone_append": false,
00:03:47.567 "compare": false,
00:03:47.567 "compare_and_write": false,
00:03:47.567 "abort": true,
00:03:47.567 "seek_hole": false,
00:03:47.567 "seek_data": false,
00:03:47.567 "copy": true,
00:03:47.567 "nvme_iov_md": false
00:03:47.567 },
00:03:47.567 "memory_domains": [
00:03:47.567 {
00:03:47.567 "dma_device_id": "system",
00:03:47.567 "dma_device_type": 1
00:03:47.567 },
00:03:47.567 {
00:03:47.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:47.567 "dma_device_type": 2
00:03:47.567 }
00:03:47.567 ],
00:03:47.567 "driver_specific": {}
00:03:47.567 }
00:03:47.567 ]'
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 16:58:05.438328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
[2024-11-20 16:58:05.438359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 16:58:05.438371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd9280
[2024-11-20 16:58:05.438377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 16:58:05.439448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 16:58:05.439468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
Passthru0
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:47.567 {
00:03:47.567 "name": "Malloc0",
00:03:47.567 "aliases": [
00:03:47.567 "ca322bb9-d368-4e4c-8f1a-3b3191467457"
00:03:47.567 ],
00:03:47.567 "product_name": "Malloc disk",
00:03:47.567 "block_size": 512,
00:03:47.567 "num_blocks": 16384,
00:03:47.567 "uuid": "ca322bb9-d368-4e4c-8f1a-3b3191467457",
00:03:47.567 "assigned_rate_limits": {
00:03:47.567 "rw_ios_per_sec": 0,
00:03:47.567 "rw_mbytes_per_sec": 0,
00:03:47.567 "r_mbytes_per_sec": 0,
00:03:47.567 "w_mbytes_per_sec": 0
00:03:47.567 },
00:03:47.567 "claimed": true,
00:03:47.567 "claim_type": "exclusive_write",
00:03:47.567 "zoned": false,
00:03:47.567 "supported_io_types": {
00:03:47.567 "read": true,
00:03:47.567 "write": true,
00:03:47.567 "unmap": true,
00:03:47.567 "flush": true,
00:03:47.567 "reset": true,
00:03:47.567 "nvme_admin": false,
00:03:47.567 "nvme_io": false,
00:03:47.567 "nvme_io_md": false,
00:03:47.567 "write_zeroes": true,
00:03:47.567 "zcopy": true,
00:03:47.567 "get_zone_info": false,
00:03:47.567 "zone_management": false,
00:03:47.567 "zone_append": false,
00:03:47.567 "compare": false,
00:03:47.567 "compare_and_write": false,
00:03:47.567 "abort": true,
00:03:47.567 "seek_hole": false,
00:03:47.567 "seek_data": false,
00:03:47.567 "copy": true,
00:03:47.567 "nvme_iov_md": false
00:03:47.567 },
00:03:47.567 "memory_domains": [
00:03:47.567 {
00:03:47.567 "dma_device_id": "system",
00:03:47.567 "dma_device_type": 1
00:03:47.567 },
00:03:47.567 {
00:03:47.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:47.567 "dma_device_type": 2
00:03:47.567 }
00:03:47.567 ],
00:03:47.567 "driver_specific": {}
00:03:47.567 },
00:03:47.567 {
00:03:47.567 "name": "Passthru0", 00:03:47.567 "aliases": [ 00:03:47.567 "35858964-594a-5222-8f1d-f7aaf03b4168" 00:03:47.567 ], 00:03:47.567 "product_name": "passthru", 00:03:47.567 "block_size": 512, 00:03:47.567 "num_blocks": 16384, 00:03:47.567 "uuid": "35858964-594a-5222-8f1d-f7aaf03b4168", 00:03:47.567 "assigned_rate_limits": { 00:03:47.567 "rw_ios_per_sec": 0, 00:03:47.567 "rw_mbytes_per_sec": 0, 00:03:47.568 "r_mbytes_per_sec": 0, 00:03:47.568 "w_mbytes_per_sec": 0 00:03:47.568 }, 00:03:47.568 "claimed": false, 00:03:47.568 "zoned": false, 00:03:47.568 "supported_io_types": { 00:03:47.568 "read": true, 00:03:47.568 "write": true, 00:03:47.568 "unmap": true, 00:03:47.568 "flush": true, 00:03:47.568 "reset": true, 00:03:47.568 "nvme_admin": false, 00:03:47.568 "nvme_io": false, 00:03:47.568 "nvme_io_md": false, 00:03:47.568 "write_zeroes": true, 00:03:47.568 "zcopy": true, 00:03:47.568 "get_zone_info": false, 00:03:47.568 "zone_management": false, 00:03:47.568 "zone_append": false, 00:03:47.568 "compare": false, 00:03:47.568 "compare_and_write": false, 00:03:47.568 "abort": true, 00:03:47.568 "seek_hole": false, 00:03:47.568 "seek_data": false, 00:03:47.568 "copy": true, 00:03:47.568 "nvme_iov_md": false 00:03:47.568 }, 00:03:47.568 "memory_domains": [ 00:03:47.568 { 00:03:47.568 "dma_device_id": "system", 00:03:47.568 "dma_device_type": 1 00:03:47.568 }, 00:03:47.568 { 00:03:47.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.568 "dma_device_type": 2 00:03:47.568 } 00:03:47.568 ], 00:03:47.568 "driver_specific": { 00:03:47.568 "passthru": { 00:03:47.568 "name": "Passthru0", 00:03:47.568 "base_bdev_name": "Malloc0" 00:03:47.568 } 00:03:47.568 } 00:03:47.568 } 00:03:47.568 ]' 00:03:47.568 16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.568 16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.568 16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.568 16:58:05 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.568 16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.568 16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.568 16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.568 16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.568 16:58:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.568 00:03:47.568 real 0m0.282s 00:03:47.568 user 0m0.182s 00:03:47.568 sys 0m0.038s 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.568 16:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.568 ************************************ 00:03:47.568 END TEST rpc_integrity 00:03:47.568 ************************************ 00:03:47.826 16:58:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.826 16:58:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.826 16:58:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.826 16:58:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.826 ************************************ 00:03:47.826 START TEST rpc_plugins 
00:03:47.826 ************************************ 00:03:47.826 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:47.826 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.826 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.827 { 00:03:47.827 "name": "Malloc1", 00:03:47.827 "aliases": [ 00:03:47.827 "9cfa29be-9c28-46c7-ab0c-84e206d98293" 00:03:47.827 ], 00:03:47.827 "product_name": "Malloc disk", 00:03:47.827 "block_size": 4096, 00:03:47.827 "num_blocks": 256, 00:03:47.827 "uuid": "9cfa29be-9c28-46c7-ab0c-84e206d98293", 00:03:47.827 "assigned_rate_limits": { 00:03:47.827 "rw_ios_per_sec": 0, 00:03:47.827 "rw_mbytes_per_sec": 0, 00:03:47.827 "r_mbytes_per_sec": 0, 00:03:47.827 "w_mbytes_per_sec": 0 00:03:47.827 }, 00:03:47.827 "claimed": false, 00:03:47.827 "zoned": false, 00:03:47.827 "supported_io_types": { 00:03:47.827 "read": true, 00:03:47.827 "write": true, 00:03:47.827 "unmap": true, 00:03:47.827 "flush": true, 00:03:47.827 "reset": true, 00:03:47.827 "nvme_admin": false, 00:03:47.827 "nvme_io": false, 00:03:47.827 "nvme_io_md": false, 00:03:47.827 "write_zeroes": true, 00:03:47.827 "zcopy": true, 00:03:47.827 "get_zone_info": false, 00:03:47.827 "zone_management": false, 00:03:47.827 
"zone_append": false, 00:03:47.827 "compare": false, 00:03:47.827 "compare_and_write": false, 00:03:47.827 "abort": true, 00:03:47.827 "seek_hole": false, 00:03:47.827 "seek_data": false, 00:03:47.827 "copy": true, 00:03:47.827 "nvme_iov_md": false 00:03:47.827 }, 00:03:47.827 "memory_domains": [ 00:03:47.827 { 00:03:47.827 "dma_device_id": "system", 00:03:47.827 "dma_device_type": 1 00:03:47.827 }, 00:03:47.827 { 00:03:47.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.827 "dma_device_type": 2 00:03:47.827 } 00:03:47.827 ], 00:03:47.827 "driver_specific": {} 00:03:47.827 } 00:03:47.827 ]' 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:47.827 16:58:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.827 00:03:47.827 real 0m0.143s 00:03:47.827 user 0m0.083s 00:03:47.827 sys 0m0.024s 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.827 16:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.827 ************************************ 
00:03:47.827 END TEST rpc_plugins 00:03:47.827 ************************************ 00:03:47.827 16:58:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:47.827 16:58:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.827 16:58:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.827 16:58:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.827 ************************************ 00:03:47.827 START TEST rpc_trace_cmd_test 00:03:47.827 ************************************ 00:03:47.827 16:58:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:47.827 16:58:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:48.085 16:58:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:48.085 16:58:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.085 16:58:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.085 16:58:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.085 16:58:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:48.085 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2297835", 00:03:48.085 "tpoint_group_mask": "0x8", 00:03:48.085 "iscsi_conn": { 00:03:48.085 "mask": "0x2", 00:03:48.085 "tpoint_mask": "0x0" 00:03:48.085 }, 00:03:48.085 "scsi": { 00:03:48.085 "mask": "0x4", 00:03:48.085 "tpoint_mask": "0x0" 00:03:48.085 }, 00:03:48.085 "bdev": { 00:03:48.085 "mask": "0x8", 00:03:48.085 "tpoint_mask": "0xffffffffffffffff" 00:03:48.085 }, 00:03:48.085 "nvmf_rdma": { 00:03:48.085 "mask": "0x10", 00:03:48.085 "tpoint_mask": "0x0" 00:03:48.085 }, 00:03:48.085 "nvmf_tcp": { 00:03:48.085 "mask": "0x20", 00:03:48.085 "tpoint_mask": "0x0" 00:03:48.085 }, 00:03:48.085 "ftl": { 00:03:48.085 "mask": "0x40", 00:03:48.085 "tpoint_mask": "0x0" 00:03:48.085 }, 00:03:48.085 "blobfs": { 00:03:48.085 "mask": "0x80", 00:03:48.085 
"tpoint_mask": "0x0" 00:03:48.085 }, 00:03:48.085 "dsa": { 00:03:48.085 "mask": "0x200", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "thread": { 00:03:48.086 "mask": "0x400", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "nvme_pcie": { 00:03:48.086 "mask": "0x800", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "iaa": { 00:03:48.086 "mask": "0x1000", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "nvme_tcp": { 00:03:48.086 "mask": "0x2000", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "bdev_nvme": { 00:03:48.086 "mask": "0x4000", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "sock": { 00:03:48.086 "mask": "0x8000", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "blob": { 00:03:48.086 "mask": "0x10000", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "bdev_raid": { 00:03:48.086 "mask": "0x20000", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 }, 00:03:48.086 "scheduler": { 00:03:48.086 "mask": "0x40000", 00:03:48.086 "tpoint_mask": "0x0" 00:03:48.086 } 00:03:48.086 }' 00:03:48.086 16:58:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:48.086 16:58:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:48.086 16:58:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:48.086 16:58:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:48.086 16:58:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:48.086 16:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:48.086 16:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:48.086 16:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:48.086 16:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:48.086 16:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:48.086 00:03:48.086 real 0m0.210s 00:03:48.086 user 0m0.176s 00:03:48.086 sys 0m0.026s 00:03:48.086 16:58:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.086 16:58:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.086 ************************************ 00:03:48.086 END TEST rpc_trace_cmd_test 00:03:48.086 ************************************ 00:03:48.086 16:58:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:48.086 16:58:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:48.086 16:58:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:48.086 16:58:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.086 16:58:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.086 16:58:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.345 ************************************ 00:03:48.345 START TEST rpc_daemon_integrity 00:03:48.345 ************************************ 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.345 { 00:03:48.345 "name": "Malloc2", 00:03:48.345 "aliases": [ 00:03:48.345 "f0937c68-28d4-419c-9dc4-64d808b204ce" 00:03:48.345 ], 00:03:48.345 "product_name": "Malloc disk", 00:03:48.345 "block_size": 512, 00:03:48.345 "num_blocks": 16384, 00:03:48.345 "uuid": "f0937c68-28d4-419c-9dc4-64d808b204ce", 00:03:48.345 "assigned_rate_limits": { 00:03:48.345 "rw_ios_per_sec": 0, 00:03:48.345 "rw_mbytes_per_sec": 0, 00:03:48.345 "r_mbytes_per_sec": 0, 00:03:48.345 "w_mbytes_per_sec": 0 00:03:48.345 }, 00:03:48.345 "claimed": false, 00:03:48.345 "zoned": false, 00:03:48.345 "supported_io_types": { 00:03:48.345 "read": true, 00:03:48.345 "write": true, 00:03:48.345 "unmap": true, 00:03:48.345 "flush": true, 00:03:48.345 "reset": true, 00:03:48.345 "nvme_admin": false, 00:03:48.345 "nvme_io": false, 00:03:48.345 "nvme_io_md": false, 00:03:48.345 "write_zeroes": true, 00:03:48.345 "zcopy": true, 00:03:48.345 "get_zone_info": false, 00:03:48.345 "zone_management": false, 00:03:48.345 "zone_append": false, 00:03:48.345 "compare": false, 00:03:48.345 "compare_and_write": false, 00:03:48.345 "abort": true, 00:03:48.345 "seek_hole": false, 00:03:48.345 "seek_data": false, 00:03:48.345 "copy": true, 00:03:48.345 "nvme_iov_md": false 00:03:48.345 }, 00:03:48.345 "memory_domains": [ 00:03:48.345 { 
00:03:48.345 "dma_device_id": "system", 00:03:48.345 "dma_device_type": 1 00:03:48.345 }, 00:03:48.345 { 00:03:48.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.345 "dma_device_type": 2 00:03:48.345 } 00:03:48.345 ], 00:03:48.345 "driver_specific": {} 00:03:48.345 } 00:03:48.345 ]' 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.345 [2024-11-20 16:58:06.268582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:48.345 [2024-11-20 16:58:06.268609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.345 [2024-11-20 16:58:06.268621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fdb150 00:03:48.345 [2024-11-20 16:58:06.268627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.345 [2024-11-20 16:58:06.269599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.345 [2024-11-20 16:58:06.269618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.345 Passthru0 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.345 { 00:03:48.345 "name": "Malloc2", 00:03:48.345 "aliases": [ 00:03:48.345 "f0937c68-28d4-419c-9dc4-64d808b204ce" 00:03:48.345 ], 00:03:48.345 "product_name": "Malloc disk", 00:03:48.345 "block_size": 512, 00:03:48.345 "num_blocks": 16384, 00:03:48.345 "uuid": "f0937c68-28d4-419c-9dc4-64d808b204ce", 00:03:48.345 "assigned_rate_limits": { 00:03:48.345 "rw_ios_per_sec": 0, 00:03:48.345 "rw_mbytes_per_sec": 0, 00:03:48.345 "r_mbytes_per_sec": 0, 00:03:48.345 "w_mbytes_per_sec": 0 00:03:48.345 }, 00:03:48.345 "claimed": true, 00:03:48.345 "claim_type": "exclusive_write", 00:03:48.345 "zoned": false, 00:03:48.345 "supported_io_types": { 00:03:48.345 "read": true, 00:03:48.345 "write": true, 00:03:48.345 "unmap": true, 00:03:48.345 "flush": true, 00:03:48.345 "reset": true, 00:03:48.345 "nvme_admin": false, 00:03:48.345 "nvme_io": false, 00:03:48.345 "nvme_io_md": false, 00:03:48.345 "write_zeroes": true, 00:03:48.345 "zcopy": true, 00:03:48.345 "get_zone_info": false, 00:03:48.345 "zone_management": false, 00:03:48.345 "zone_append": false, 00:03:48.345 "compare": false, 00:03:48.345 "compare_and_write": false, 00:03:48.345 "abort": true, 00:03:48.345 "seek_hole": false, 00:03:48.345 "seek_data": false, 00:03:48.345 "copy": true, 00:03:48.345 "nvme_iov_md": false 00:03:48.345 }, 00:03:48.345 "memory_domains": [ 00:03:48.345 { 00:03:48.345 "dma_device_id": "system", 00:03:48.345 "dma_device_type": 1 00:03:48.345 }, 00:03:48.345 { 00:03:48.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.345 "dma_device_type": 2 00:03:48.345 } 00:03:48.345 ], 00:03:48.345 "driver_specific": {} 00:03:48.345 }, 00:03:48.345 { 00:03:48.345 "name": "Passthru0", 00:03:48.345 "aliases": [ 00:03:48.345 "c369eb92-421f-52a2-8ea9-0b4937e495b3" 00:03:48.345 ], 00:03:48.345 "product_name": "passthru", 00:03:48.345 "block_size": 512, 00:03:48.345 "num_blocks": 16384, 00:03:48.345 "uuid": 
"c369eb92-421f-52a2-8ea9-0b4937e495b3", 00:03:48.345 "assigned_rate_limits": { 00:03:48.345 "rw_ios_per_sec": 0, 00:03:48.345 "rw_mbytes_per_sec": 0, 00:03:48.345 "r_mbytes_per_sec": 0, 00:03:48.345 "w_mbytes_per_sec": 0 00:03:48.345 }, 00:03:48.345 "claimed": false, 00:03:48.345 "zoned": false, 00:03:48.345 "supported_io_types": { 00:03:48.345 "read": true, 00:03:48.345 "write": true, 00:03:48.345 "unmap": true, 00:03:48.345 "flush": true, 00:03:48.345 "reset": true, 00:03:48.345 "nvme_admin": false, 00:03:48.345 "nvme_io": false, 00:03:48.345 "nvme_io_md": false, 00:03:48.345 "write_zeroes": true, 00:03:48.345 "zcopy": true, 00:03:48.345 "get_zone_info": false, 00:03:48.345 "zone_management": false, 00:03:48.345 "zone_append": false, 00:03:48.345 "compare": false, 00:03:48.345 "compare_and_write": false, 00:03:48.345 "abort": true, 00:03:48.345 "seek_hole": false, 00:03:48.345 "seek_data": false, 00:03:48.345 "copy": true, 00:03:48.345 "nvme_iov_md": false 00:03:48.345 }, 00:03:48.345 "memory_domains": [ 00:03:48.345 { 00:03:48.345 "dma_device_id": "system", 00:03:48.345 "dma_device_type": 1 00:03:48.345 }, 00:03:48.345 { 00:03:48.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.345 "dma_device_type": 2 00:03:48.345 } 00:03:48.345 ], 00:03:48.345 "driver_specific": { 00:03:48.345 "passthru": { 00:03:48.345 "name": "Passthru0", 00:03:48.345 "base_bdev_name": "Malloc2" 00:03:48.345 } 00:03:48.345 } 00:03:48.345 } 00:03:48.345 ]' 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:48.345 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.346 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.346 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.346 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.346 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:48.346 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.346 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:48.346 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.346 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.604 16:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.604 00:03:48.604 real 0m0.254s 00:03:48.604 user 0m0.156s 00:03:48.604 sys 0m0.038s 00:03:48.604 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.604 16:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.604 ************************************ 00:03:48.604 END TEST rpc_daemon_integrity 00:03:48.604 ************************************ 00:03:48.604 16:58:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:48.604 16:58:06 rpc -- rpc/rpc.sh@84 -- # killprocess 2297835 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 2297835 ']' 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@958 -- # kill -0 2297835 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@959 -- # uname 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.604 16:58:06 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297835 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297835' 00:03:48.604 killing process with pid 2297835 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@973 -- # kill 2297835 00:03:48.604 16:58:06 rpc -- common/autotest_common.sh@978 -- # wait 2297835 00:03:48.863 00:03:48.863 real 0m2.556s 00:03:48.863 user 0m3.235s 00:03:48.863 sys 0m0.746s 00:03:48.863 16:58:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.863 16:58:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.863 ************************************ 00:03:48.863 END TEST rpc 00:03:48.863 ************************************ 00:03:48.863 16:58:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:48.863 16:58:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.863 16:58:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.863 16:58:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.863 ************************************ 00:03:48.863 START TEST skip_rpc 00:03:48.863 ************************************ 00:03:48.863 16:58:06 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:49.122 * Looking for test storage... 
00:03:49.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.122 16:58:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.122 16:58:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.122 16:58:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.122 16:58:07 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.122 16:58:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.123 16:58:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:49.123 16:58:07 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.123 16:58:07 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.123 --rc genhtml_branch_coverage=1 00:03:49.123 --rc genhtml_function_coverage=1 00:03:49.123 --rc genhtml_legend=1 00:03:49.123 --rc geninfo_all_blocks=1 00:03:49.123 --rc geninfo_unexecuted_blocks=1 00:03:49.123 00:03:49.123 ' 00:03:49.123 16:58:07 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.123 --rc genhtml_branch_coverage=1 00:03:49.123 --rc genhtml_function_coverage=1 00:03:49.123 --rc genhtml_legend=1 00:03:49.123 --rc geninfo_all_blocks=1 00:03:49.123 --rc geninfo_unexecuted_blocks=1 00:03:49.123 00:03:49.123 ' 00:03:49.123 16:58:07 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:49.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.123 --rc genhtml_branch_coverage=1 00:03:49.123 --rc genhtml_function_coverage=1 00:03:49.123 --rc genhtml_legend=1 00:03:49.123 --rc geninfo_all_blocks=1 00:03:49.123 --rc geninfo_unexecuted_blocks=1 00:03:49.123 00:03:49.123 ' 00:03:49.123 16:58:07 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.123 --rc genhtml_branch_coverage=1 00:03:49.123 --rc genhtml_function_coverage=1 00:03:49.123 --rc genhtml_legend=1 00:03:49.123 --rc geninfo_all_blocks=1 00:03:49.123 --rc geninfo_unexecuted_blocks=1 00:03:49.123 00:03:49.123 ' 00:03:49.123 16:58:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.123 16:58:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:49.123 16:58:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:49.123 16:58:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.123 16:58:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.123 16:58:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.123 ************************************ 00:03:49.123 START TEST skip_rpc 00:03:49.123 ************************************ 00:03:49.123 16:58:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:49.123 16:58:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2298478 00:03:49.123 16:58:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.123 16:58:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:49.123 16:58:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:49.123 [2024-11-20 16:58:07.101353] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:03:49.123 [2024-11-20 16:58:07.101390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2298478 ] 00:03:49.381 [2024-11-20 16:58:07.174886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.381 [2024-11-20 16:58:07.215119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.648 16:58:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:54.648 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:54.648 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:54.648 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:54.648 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:54.648 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:54.648 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:54.649 16:58:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2298478 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2298478 ']' 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2298478 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2298478 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2298478' 00:03:54.649 killing process with pid 2298478 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2298478 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2298478 00:03:54.649 00:03:54.649 real 0m5.368s 00:03:54.649 user 0m5.123s 00:03:54.649 sys 0m0.284s 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.649 16:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.649 ************************************ 00:03:54.649 END TEST skip_rpc 00:03:54.649 ************************************ 00:03:54.649 16:58:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:54.649 16:58:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.649 16:58:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.649 16:58:12 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.649 ************************************ 00:03:54.649 START TEST skip_rpc_with_json 00:03:54.649 ************************************ 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2299430 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2299430 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2299430 ']' 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:54.649 16:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.649 [2024-11-20 16:58:12.547285] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:03:54.649 [2024-11-20 16:58:12.547332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299430 ] 00:03:54.649 [2024-11-20 16:58:12.623560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.649 [2024-11-20 16:58:12.661682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.584 [2024-11-20 16:58:13.380865] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:55.584 request: 00:03:55.584 { 00:03:55.584 "trtype": "tcp", 00:03:55.584 "method": "nvmf_get_transports", 00:03:55.584 "req_id": 1 00:03:55.584 } 00:03:55.584 Got JSON-RPC error response 00:03:55.584 response: 00:03:55.584 { 00:03:55.584 "code": -19, 00:03:55.584 "message": "No such device" 00:03:55.584 } 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.584 [2024-11-20 16:58:13.392976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.584 16:58:13 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:55.584 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.584 { 00:03:55.584 "subsystems": [ 00:03:55.584 { 00:03:55.584 "subsystem": "fsdev", 00:03:55.584 "config": [ 00:03:55.584 { 00:03:55.584 "method": "fsdev_set_opts", 00:03:55.584 "params": { 00:03:55.584 "fsdev_io_pool_size": 65535, 00:03:55.584 "fsdev_io_cache_size": 256 00:03:55.584 } 00:03:55.584 } 00:03:55.584 ] 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "subsystem": "vfio_user_target", 00:03:55.584 "config": null 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "subsystem": "keyring", 00:03:55.584 "config": [] 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "subsystem": "iobuf", 00:03:55.584 "config": [ 00:03:55.584 { 00:03:55.584 "method": "iobuf_set_options", 00:03:55.584 "params": { 00:03:55.584 "small_pool_count": 8192, 00:03:55.584 "large_pool_count": 1024, 00:03:55.584 "small_bufsize": 8192, 00:03:55.584 "large_bufsize": 135168, 00:03:55.584 "enable_numa": false 00:03:55.584 } 00:03:55.584 } 00:03:55.584 ] 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "subsystem": "sock", 00:03:55.584 "config": [ 00:03:55.584 { 00:03:55.584 "method": "sock_set_default_impl", 00:03:55.584 "params": { 00:03:55.584 "impl_name": "posix" 00:03:55.584 } 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "method": "sock_impl_set_options", 00:03:55.584 "params": { 00:03:55.584 "impl_name": "ssl", 00:03:55.584 "recv_buf_size": 4096, 00:03:55.584 "send_buf_size": 4096, 
00:03:55.584 "enable_recv_pipe": true, 00:03:55.584 "enable_quickack": false, 00:03:55.584 "enable_placement_id": 0, 00:03:55.584 "enable_zerocopy_send_server": true, 00:03:55.584 "enable_zerocopy_send_client": false, 00:03:55.584 "zerocopy_threshold": 0, 00:03:55.584 "tls_version": 0, 00:03:55.584 "enable_ktls": false 00:03:55.584 } 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "method": "sock_impl_set_options", 00:03:55.584 "params": { 00:03:55.584 "impl_name": "posix", 00:03:55.584 "recv_buf_size": 2097152, 00:03:55.584 "send_buf_size": 2097152, 00:03:55.584 "enable_recv_pipe": true, 00:03:55.584 "enable_quickack": false, 00:03:55.584 "enable_placement_id": 0, 00:03:55.584 "enable_zerocopy_send_server": true, 00:03:55.584 "enable_zerocopy_send_client": false, 00:03:55.584 "zerocopy_threshold": 0, 00:03:55.584 "tls_version": 0, 00:03:55.584 "enable_ktls": false 00:03:55.584 } 00:03:55.584 } 00:03:55.584 ] 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "subsystem": "vmd", 00:03:55.584 "config": [] 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "subsystem": "accel", 00:03:55.584 "config": [ 00:03:55.584 { 00:03:55.584 "method": "accel_set_options", 00:03:55.584 "params": { 00:03:55.584 "small_cache_size": 128, 00:03:55.584 "large_cache_size": 16, 00:03:55.584 "task_count": 2048, 00:03:55.584 "sequence_count": 2048, 00:03:55.584 "buf_count": 2048 00:03:55.584 } 00:03:55.584 } 00:03:55.584 ] 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "subsystem": "bdev", 00:03:55.584 "config": [ 00:03:55.584 { 00:03:55.584 "method": "bdev_set_options", 00:03:55.584 "params": { 00:03:55.584 "bdev_io_pool_size": 65535, 00:03:55.584 "bdev_io_cache_size": 256, 00:03:55.584 "bdev_auto_examine": true, 00:03:55.584 "iobuf_small_cache_size": 128, 00:03:55.584 "iobuf_large_cache_size": 16 00:03:55.584 } 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "method": "bdev_raid_set_options", 00:03:55.584 "params": { 00:03:55.584 "process_window_size_kb": 1024, 00:03:55.584 "process_max_bandwidth_mb_sec": 0 
00:03:55.584 } 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "method": "bdev_iscsi_set_options", 00:03:55.584 "params": { 00:03:55.584 "timeout_sec": 30 00:03:55.584 } 00:03:55.584 }, 00:03:55.584 { 00:03:55.584 "method": "bdev_nvme_set_options", 00:03:55.584 "params": { 00:03:55.584 "action_on_timeout": "none", 00:03:55.584 "timeout_us": 0, 00:03:55.585 "timeout_admin_us": 0, 00:03:55.585 "keep_alive_timeout_ms": 10000, 00:03:55.585 "arbitration_burst": 0, 00:03:55.585 "low_priority_weight": 0, 00:03:55.585 "medium_priority_weight": 0, 00:03:55.585 "high_priority_weight": 0, 00:03:55.585 "nvme_adminq_poll_period_us": 10000, 00:03:55.585 "nvme_ioq_poll_period_us": 0, 00:03:55.585 "io_queue_requests": 0, 00:03:55.585 "delay_cmd_submit": true, 00:03:55.585 "transport_retry_count": 4, 00:03:55.585 "bdev_retry_count": 3, 00:03:55.585 "transport_ack_timeout": 0, 00:03:55.585 "ctrlr_loss_timeout_sec": 0, 00:03:55.585 "reconnect_delay_sec": 0, 00:03:55.585 "fast_io_fail_timeout_sec": 0, 00:03:55.585 "disable_auto_failback": false, 00:03:55.585 "generate_uuids": false, 00:03:55.585 "transport_tos": 0, 00:03:55.585 "nvme_error_stat": false, 00:03:55.585 "rdma_srq_size": 0, 00:03:55.585 "io_path_stat": false, 00:03:55.585 "allow_accel_sequence": false, 00:03:55.585 "rdma_max_cq_size": 0, 00:03:55.585 "rdma_cm_event_timeout_ms": 0, 00:03:55.585 "dhchap_digests": [ 00:03:55.585 "sha256", 00:03:55.585 "sha384", 00:03:55.585 "sha512" 00:03:55.585 ], 00:03:55.585 "dhchap_dhgroups": [ 00:03:55.585 "null", 00:03:55.585 "ffdhe2048", 00:03:55.585 "ffdhe3072", 00:03:55.585 "ffdhe4096", 00:03:55.585 "ffdhe6144", 00:03:55.585 "ffdhe8192" 00:03:55.585 ] 00:03:55.585 } 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "method": "bdev_nvme_set_hotplug", 00:03:55.585 "params": { 00:03:55.585 "period_us": 100000, 00:03:55.585 "enable": false 00:03:55.585 } 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "method": "bdev_wait_for_examine" 00:03:55.585 } 00:03:55.585 ] 00:03:55.585 }, 00:03:55.585 { 
00:03:55.585 "subsystem": "scsi", 00:03:55.585 "config": null 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "subsystem": "scheduler", 00:03:55.585 "config": [ 00:03:55.585 { 00:03:55.585 "method": "framework_set_scheduler", 00:03:55.585 "params": { 00:03:55.585 "name": "static" 00:03:55.585 } 00:03:55.585 } 00:03:55.585 ] 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "subsystem": "vhost_scsi", 00:03:55.585 "config": [] 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "subsystem": "vhost_blk", 00:03:55.585 "config": [] 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "subsystem": "ublk", 00:03:55.585 "config": [] 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "subsystem": "nbd", 00:03:55.585 "config": [] 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "subsystem": "nvmf", 00:03:55.585 "config": [ 00:03:55.585 { 00:03:55.585 "method": "nvmf_set_config", 00:03:55.585 "params": { 00:03:55.585 "discovery_filter": "match_any", 00:03:55.585 "admin_cmd_passthru": { 00:03:55.585 "identify_ctrlr": false 00:03:55.585 }, 00:03:55.585 "dhchap_digests": [ 00:03:55.585 "sha256", 00:03:55.585 "sha384", 00:03:55.585 "sha512" 00:03:55.585 ], 00:03:55.585 "dhchap_dhgroups": [ 00:03:55.585 "null", 00:03:55.585 "ffdhe2048", 00:03:55.585 "ffdhe3072", 00:03:55.585 "ffdhe4096", 00:03:55.585 "ffdhe6144", 00:03:55.585 "ffdhe8192" 00:03:55.585 ] 00:03:55.585 } 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "method": "nvmf_set_max_subsystems", 00:03:55.585 "params": { 00:03:55.585 "max_subsystems": 1024 00:03:55.585 } 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "method": "nvmf_set_crdt", 00:03:55.585 "params": { 00:03:55.585 "crdt1": 0, 00:03:55.585 "crdt2": 0, 00:03:55.585 "crdt3": 0 00:03:55.585 } 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "method": "nvmf_create_transport", 00:03:55.585 "params": { 00:03:55.585 "trtype": "TCP", 00:03:55.585 "max_queue_depth": 128, 00:03:55.585 "max_io_qpairs_per_ctrlr": 127, 00:03:55.585 "in_capsule_data_size": 4096, 00:03:55.585 "max_io_size": 131072, 00:03:55.585 
"io_unit_size": 131072, 00:03:55.585 "max_aq_depth": 128, 00:03:55.585 "num_shared_buffers": 511, 00:03:55.585 "buf_cache_size": 4294967295, 00:03:55.585 "dif_insert_or_strip": false, 00:03:55.585 "zcopy": false, 00:03:55.585 "c2h_success": true, 00:03:55.585 "sock_priority": 0, 00:03:55.585 "abort_timeout_sec": 1, 00:03:55.585 "ack_timeout": 0, 00:03:55.585 "data_wr_pool_size": 0 00:03:55.585 } 00:03:55.585 } 00:03:55.585 ] 00:03:55.585 }, 00:03:55.585 { 00:03:55.585 "subsystem": "iscsi", 00:03:55.585 "config": [ 00:03:55.585 { 00:03:55.585 "method": "iscsi_set_options", 00:03:55.585 "params": { 00:03:55.585 "node_base": "iqn.2016-06.io.spdk", 00:03:55.585 "max_sessions": 128, 00:03:55.585 "max_connections_per_session": 2, 00:03:55.585 "max_queue_depth": 64, 00:03:55.585 "default_time2wait": 2, 00:03:55.585 "default_time2retain": 20, 00:03:55.585 "first_burst_length": 8192, 00:03:55.585 "immediate_data": true, 00:03:55.585 "allow_duplicated_isid": false, 00:03:55.585 "error_recovery_level": 0, 00:03:55.585 "nop_timeout": 60, 00:03:55.585 "nop_in_interval": 30, 00:03:55.585 "disable_chap": false, 00:03:55.585 "require_chap": false, 00:03:55.585 "mutual_chap": false, 00:03:55.585 "chap_group": 0, 00:03:55.585 "max_large_datain_per_connection": 64, 00:03:55.585 "max_r2t_per_connection": 4, 00:03:55.585 "pdu_pool_size": 36864, 00:03:55.585 "immediate_data_pool_size": 16384, 00:03:55.585 "data_out_pool_size": 2048 00:03:55.585 } 00:03:55.585 } 00:03:55.585 ] 00:03:55.585 } 00:03:55.585 ] 00:03:55.585 } 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2299430 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2299430 ']' 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2299430 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299430 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299430' 00:03:55.585 killing process with pid 2299430 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2299430 00:03:55.585 16:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2299430 00:03:56.153 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2299669 00:03:56.153 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.153 16:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2299669 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2299669 ']' 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2299669 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299669 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299669' 00:04:01.424 killing process with pid 2299669 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2299669 00:04:01.424 16:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2299669 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.424 00:04:01.424 real 0m6.791s 00:04:01.424 user 0m6.638s 00:04:01.424 sys 0m0.638s 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.424 ************************************ 00:04:01.424 END TEST skip_rpc_with_json 00:04:01.424 ************************************ 00:04:01.424 16:58:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:01.424 16:58:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.424 16:58:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.424 16:58:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.424 ************************************ 00:04:01.424 START TEST skip_rpc_with_delay 00:04:01.424 ************************************ 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.424 [2024-11-20 16:58:19.409029] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:01.424 00:04:01.424 real 0m0.069s 00:04:01.424 user 0m0.047s 00:04:01.424 sys 0m0.022s 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.424 16:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:01.424 ************************************ 00:04:01.424 END TEST skip_rpc_with_delay 00:04:01.424 ************************************ 00:04:01.424 16:58:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:01.424 16:58:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:01.424 16:58:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:01.424 16:58:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.424 16:58:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.424 16:58:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.683 ************************************ 00:04:01.683 START TEST exit_on_failed_rpc_init 00:04:01.683 ************************************ 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2300640 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2300640 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2300640 ']' 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.683 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.683 [2024-11-20 16:58:19.544915] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:04:01.683 [2024-11-20 16:58:19.544961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300640 ] 00:04:01.683 [2024-11-20 16:58:19.619797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.683 [2024-11-20 16:58:19.662569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.942 
16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:01.942 16:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.942 [2024-11-20 16:58:19.934991] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:04:01.942 [2024-11-20 16:58:19.935036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300659 ] 00:04:02.200 [2024-11-20 16:58:20.005744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.200 [2024-11-20 16:58:20.054121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.200 [2024-11-20 16:58:20.054178] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:02.200 [2024-11-20 16:58:20.054187] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:02.200 [2024-11-20 16:58:20.054196] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:02.200 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:02.200 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:02.200 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2300640 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2300640 ']' 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2300640 00:04:02.201 16:58:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2300640 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2300640' 00:04:02.201 killing process with pid 2300640 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2300640 00:04:02.201 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2300640 00:04:02.459 00:04:02.459 real 0m0.968s 00:04:02.459 user 0m1.025s 00:04:02.459 sys 0m0.403s 00:04:02.459 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.459 16:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.459 ************************************ 00:04:02.459 END TEST exit_on_failed_rpc_init 00:04:02.459 ************************************ 00:04:02.459 16:58:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.718 00:04:02.718 real 0m13.651s 00:04:02.718 user 0m13.034s 00:04:02.718 sys 0m1.632s 00:04:02.718 16:58:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.718 16:58:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.718 ************************************ 00:04:02.718 END TEST skip_rpc 00:04:02.718 ************************************ 00:04:02.718 16:58:20 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:02.718 16:58:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.718 16:58:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.718 16:58:20 -- common/autotest_common.sh@10 -- # set +x 00:04:02.718 ************************************ 00:04:02.718 START TEST rpc_client 00:04:02.718 ************************************ 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:02.718 * Looking for test storage... 00:04:02.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.718 16:58:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.718 --rc genhtml_branch_coverage=1 00:04:02.718 --rc genhtml_function_coverage=1 00:04:02.718 --rc genhtml_legend=1 00:04:02.718 --rc geninfo_all_blocks=1 00:04:02.718 --rc geninfo_unexecuted_blocks=1 00:04:02.718 00:04:02.718 ' 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.718 --rc genhtml_branch_coverage=1 
00:04:02.718 --rc genhtml_function_coverage=1 00:04:02.718 --rc genhtml_legend=1 00:04:02.718 --rc geninfo_all_blocks=1 00:04:02.718 --rc geninfo_unexecuted_blocks=1 00:04:02.718 00:04:02.718 ' 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.718 --rc genhtml_branch_coverage=1 00:04:02.718 --rc genhtml_function_coverage=1 00:04:02.718 --rc genhtml_legend=1 00:04:02.718 --rc geninfo_all_blocks=1 00:04:02.718 --rc geninfo_unexecuted_blocks=1 00:04:02.718 00:04:02.718 ' 00:04:02.718 16:58:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.718 --rc genhtml_branch_coverage=1 00:04:02.718 --rc genhtml_function_coverage=1 00:04:02.718 --rc genhtml_legend=1 00:04:02.718 --rc geninfo_all_blocks=1 00:04:02.718 --rc geninfo_unexecuted_blocks=1 00:04:02.718 00:04:02.718 ' 00:04:02.718 16:58:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:02.977 OK 00:04:02.977 16:58:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:02.977 00:04:02.977 real 0m0.197s 00:04:02.977 user 0m0.121s 00:04:02.977 sys 0m0.090s 00:04:02.977 16:58:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.977 16:58:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:02.977 ************************************ 00:04:02.977 END TEST rpc_client 00:04:02.977 ************************************ 00:04:02.977 16:58:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:02.977 16:58:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.977 16:58:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.977 16:58:20 -- common/autotest_common.sh@10 
-- # set +x 00:04:02.977 ************************************ 00:04:02.977 START TEST json_config 00:04:02.977 ************************************ 00:04:02.977 16:58:20 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:02.977 16:58:20 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.977 16:58:20 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.977 16:58:20 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.977 16:58:20 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.977 16:58:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.977 16:58:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.977 16:58:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.977 16:58:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.977 16:58:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.977 16:58:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.977 16:58:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.977 16:58:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.978 16:58:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.978 16:58:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.978 16:58:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.978 16:58:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:02.978 16:58:20 json_config -- scripts/common.sh@345 -- # : 1 00:04:02.978 16:58:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.978 16:58:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.978 16:58:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:02.978 16:58:20 json_config -- scripts/common.sh@353 -- # local d=1 00:04:02.978 16:58:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.978 16:58:20 json_config -- scripts/common.sh@355 -- # echo 1 00:04:02.978 16:58:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.978 16:58:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:02.978 16:58:20 json_config -- scripts/common.sh@353 -- # local d=2 00:04:02.978 16:58:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.978 16:58:20 json_config -- scripts/common.sh@355 -- # echo 2 00:04:02.978 16:58:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.978 16:58:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.978 16:58:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.978 16:58:20 json_config -- scripts/common.sh@368 -- # return 0 00:04:02.978 16:58:20 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.978 16:58:20 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.978 --rc genhtml_branch_coverage=1 00:04:02.978 --rc genhtml_function_coverage=1 00:04:02.978 --rc genhtml_legend=1 00:04:02.978 --rc geninfo_all_blocks=1 00:04:02.978 --rc geninfo_unexecuted_blocks=1 00:04:02.978 00:04:02.978 ' 00:04:02.978 16:58:20 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.978 --rc genhtml_branch_coverage=1 00:04:02.978 --rc genhtml_function_coverage=1 00:04:02.978 --rc genhtml_legend=1 00:04:02.978 --rc geninfo_all_blocks=1 00:04:02.978 --rc geninfo_unexecuted_blocks=1 00:04:02.978 00:04:02.978 ' 00:04:02.978 16:58:20 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.978 --rc genhtml_branch_coverage=1 00:04:02.978 --rc genhtml_function_coverage=1 00:04:02.978 --rc genhtml_legend=1 00:04:02.978 --rc geninfo_all_blocks=1 00:04:02.978 --rc geninfo_unexecuted_blocks=1 00:04:02.978 00:04:02.978 ' 00:04:02.978 16:58:20 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.978 --rc genhtml_branch_coverage=1 00:04:02.978 --rc genhtml_function_coverage=1 00:04:02.978 --rc genhtml_legend=1 00:04:02.978 --rc geninfo_all_blocks=1 00:04:02.978 --rc geninfo_unexecuted_blocks=1 00:04:02.978 00:04:02.978 ' 00:04:02.978 16:58:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.978 16:58:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.978 16:58:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:02.978 16:58:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.978 16:58:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.978 16:58:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.978 16:58:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.978 16:58:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.978 16:58:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.978 16:58:21 json_config -- paths/export.sh@5 -- # export PATH 00:04:02.978 16:58:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@51 -- # : 0 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:02.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:02.978 16:58:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:03.237 16:58:21 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:03.237 16:58:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:03.237 16:58:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:03.237 16:58:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:03.237 16:58:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:03.237 16:58:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:03.237 16:58:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:03.238 INFO: JSON configuration test init 00:04:03.238 16:58:21 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.238 16:58:21 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:03.238 16:58:21 json_config -- json_config/common.sh@9 -- # local app=target 00:04:03.238 16:58:21 json_config -- json_config/common.sh@10 -- # shift 00:04:03.238 16:58:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:03.238 16:58:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:03.238 16:58:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:03.238 16:58:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.238 16:58:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.238 16:58:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2301011 00:04:03.238 16:58:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:03.238 Waiting for target to run... 
00:04:03.238 16:58:21 json_config -- json_config/common.sh@25 -- # waitforlisten 2301011 /var/tmp/spdk_tgt.sock 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 2301011 ']' 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:03.238 16:58:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:03.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.238 16:58:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.238 [2024-11-20 16:58:21.086750] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
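The `waitforlisten` step traced above blocks until the freshly launched `spdk_tgt` answers on its Unix-domain RPC socket (`/var/tmp/spdk_tgt.sock`). A minimal sketch of that polling pattern, with a hypothetical helper name and a generic probe command standing in for the real `rpc.py -s <sock>` call in `autotest_common.sh`:

```shell
#!/usr/bin/env bash
# wait_for_rpc: poll an RPC endpoint until it responds, or give up after
# a retry budget. "probe_cmd" stands in for an rpc.py liveness check;
# the function name and retry count are illustrative, not the autotest ones.
wait_for_rpc() {
    local probe_cmd=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        if $probe_cmd >/dev/null 2>&1; then
            return 0        # target is up and listening on the socket
        fi
        sleep 0.1           # brief back-off between probes
    done
    return 1                # gave up: target never started listening
}
```

In the log this wait guards a target started with `--wait-for-rpc`, so the probe only succeeds once the daemon has bound its RPC socket.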
00:04:03.238 [2024-11-20 16:58:21.086798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301011 ] 00:04:03.805 [2024-11-20 16:58:21.541436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.805 [2024-11-20 16:58:21.594182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.063 16:58:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.063 16:58:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:04.063 16:58:21 json_config -- json_config/common.sh@26 -- # echo '' 00:04:04.063 00:04:04.063 16:58:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:04.063 16:58:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:04.063 16:58:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.063 16:58:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.063 16:58:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:04.063 16:58:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:04.063 16:58:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.063 16:58:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.063 16:58:21 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:04.064 16:58:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:04.064 16:58:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:07.351 16:58:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.351 16:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:07.351 16:58:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@54 -- # sort 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:07.351 16:58:25 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:07.351 16:58:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.351 16:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:07.351 16:58:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.351 16:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:07.351 16:58:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:07.351 16:58:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:07.610 MallocForNvmf0 00:04:07.610 16:58:25 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
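The `tgt_rpc` calls traced above assemble the NVMe-oF target piece by piece. Collected into one plain bring-up fragment (socket path, sizes, NQN, and listener address are taken verbatim from the log; this is a summary of the traced sequence, not a runnable standalone script, since it needs a live `spdk_tgt`):

```shell
#!/usr/bin/env bash
# Bring-up sequence driven over the target's RPC socket in the log above:
# two malloc bdevs, a TCP transport, one subsystem with both namespaces,
# and a TCP listener on 127.0.0.1:4420.
SOCK=/var/tmp/spdk_tgt.sock
RPC="scripts/rpc.py -s $SOCK"

$RPC bdev_malloc_create 8 512  --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```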
00:04:07.610 16:58:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:07.869 MallocForNvmf1 00:04:07.869 16:58:25 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.869 16:58:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.869 [2024-11-20 16:58:25.882220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.127 16:58:25 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:08.127 16:58:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:08.127 16:58:26 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:08.127 16:58:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:08.385 16:58:26 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:08.385 16:58:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:08.643 16:58:26 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.643 16:58:26 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.643 [2024-11-20 16:58:26.660638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.902 16:58:26 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:08.902 16:58:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.902 16:58:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.902 16:58:26 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:08.902 16:58:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.902 16:58:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.902 16:58:26 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:08.902 16:58:26 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.902 16:58:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.902 MallocBdevForConfigChangeCheck 00:04:09.160 16:58:26 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:09.160 16:58:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.160 16:58:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.160 16:58:26 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:09.160 16:58:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.418 16:58:27 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:09.418 INFO: shutting down applications... 00:04:09.418 16:58:27 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:09.418 16:58:27 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:09.418 16:58:27 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:09.418 16:58:27 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:11.950 Calling clear_iscsi_subsystem 00:04:11.950 Calling clear_nvmf_subsystem 00:04:11.950 Calling clear_nbd_subsystem 00:04:11.950 Calling clear_ublk_subsystem 00:04:11.950 Calling clear_vhost_blk_subsystem 00:04:11.950 Calling clear_vhost_scsi_subsystem 00:04:11.950 Calling clear_bdev_subsystem 00:04:11.950 16:58:29 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:11.950 16:58:29 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:11.950 16:58:29 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:11.951 16:58:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.951 16:58:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:11.951 16:58:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:11.951 16:58:29 json_config -- json_config/json_config.sh@352 -- # break 00:04:11.951 16:58:29 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:11.951 16:58:29 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:11.951 16:58:29 json_config -- json_config/common.sh@31 -- # local app=target 00:04:11.951 16:58:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:11.951 16:58:29 json_config -- json_config/common.sh@35 -- # [[ -n 2301011 ]] 00:04:11.951 16:58:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2301011 00:04:11.951 16:58:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:11.951 16:58:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.951 16:58:29 json_config -- json_config/common.sh@41 -- # kill -0 2301011 00:04:11.951 16:58:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.518 16:58:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.518 16:58:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.518 16:58:30 json_config -- json_config/common.sh@41 -- # kill -0 2301011 00:04:12.518 16:58:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.518 16:58:30 json_config -- json_config/common.sh@43 -- # break 00:04:12.518 16:58:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.518 16:58:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.518 SPDK target shutdown done 00:04:12.518 16:58:30 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:12.518 INFO: relaunching applications... 
00:04:12.518 16:58:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.518 16:58:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:12.518 16:58:30 json_config -- json_config/common.sh@10 -- # shift 00:04:12.518 16:58:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.518 16:58:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.518 16:58:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.518 16:58:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.518 16:58:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.518 16:58:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2302745 00:04:12.518 16:58:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.518 Waiting for target to run... 00:04:12.518 16:58:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.518 16:58:30 json_config -- json_config/common.sh@25 -- # waitforlisten 2302745 /var/tmp/spdk_tgt.sock 00:04:12.518 16:58:30 json_config -- common/autotest_common.sh@835 -- # '[' -z 2302745 ']' 00:04:12.518 16:58:30 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.518 16:58:30 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.518 16:58:30 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:12.518 16:58:30 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.518 16:58:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.518 [2024-11-20 16:58:30.336954] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:04:12.518 [2024-11-20 16:58:30.337021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302745 ] 00:04:12.776 [2024-11-20 16:58:30.796401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.034 [2024-11-20 16:58:30.852709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.316 [2024-11-20 16:58:33.880134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.316 [2024-11-20 16:58:33.912498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:16.574 16:58:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.574 16:58:34 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:16.574 16:58:34 json_config -- json_config/common.sh@26 -- # echo '' 00:04:16.574 00:04:16.574 16:58:34 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:16.574 16:58:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:16.574 INFO: Checking if target configuration is the same... 
00:04:16.574 16:58:34 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:16.574 16:58:34 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.574 16:58:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.574 + '[' 2 -ne 2 ']' 00:04:16.574 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:16.574 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:16.574 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.574 +++ basename /dev/fd/62 00:04:16.574 ++ mktemp /tmp/62.XXX 00:04:16.574 + tmp_file_1=/tmp/62.jpd 00:04:16.574 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.574 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:16.574 + tmp_file_2=/tmp/spdk_tgt_config.json.P0S 00:04:16.574 + ret=0 00:04:16.574 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.141 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.141 + diff -u /tmp/62.jpd /tmp/spdk_tgt_config.json.P0S 00:04:17.141 + echo 'INFO: JSON config files are the same' 00:04:17.141 INFO: JSON config files are the same 00:04:17.141 + rm /tmp/62.jpd /tmp/spdk_tgt_config.json.P0S 00:04:17.141 + exit 0 00:04:17.141 16:58:34 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:17.141 16:58:34 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:17.141 INFO: changing configuration and checking if this can be detected... 
00:04:17.141 16:58:34 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:17.141 16:58:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:17.141 16:58:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.141 16:58:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:17.141 16:58:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.141 + '[' 2 -ne 2 ']' 00:04:17.141 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:17.399 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:17.399 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:17.399 +++ basename /dev/fd/62 00:04:17.399 ++ mktemp /tmp/62.XXX 00:04:17.399 + tmp_file_1=/tmp/62.Gop 00:04:17.399 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.399 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:17.399 + tmp_file_2=/tmp/spdk_tgt_config.json.OUz 00:04:17.399 + ret=0 00:04:17.399 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.658 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:17.658 + diff -u /tmp/62.Gop /tmp/spdk_tgt_config.json.OUz 00:04:17.658 + ret=1 00:04:17.658 + echo '=== Start of file: /tmp/62.Gop ===' 00:04:17.658 + cat /tmp/62.Gop 00:04:17.658 + echo '=== End of file: /tmp/62.Gop ===' 00:04:17.658 + echo '' 00:04:17.658 + echo '=== Start of file: /tmp/spdk_tgt_config.json.OUz ===' 00:04:17.658 + cat /tmp/spdk_tgt_config.json.OUz 00:04:17.658 + echo '=== End of file: /tmp/spdk_tgt_config.json.OUz ===' 00:04:17.658 + echo '' 00:04:17.658 + rm /tmp/62.Gop /tmp/spdk_tgt_config.json.OUz 00:04:17.658 + exit 1 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:17.658 INFO: configuration change detected. 
00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@324 -- # [[ -n 2302745 ]] 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.658 16:58:35 json_config -- json_config/json_config.sh@330 -- # killprocess 2302745 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@954 -- # '[' -z 2302745 ']' 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@958 -- # kill -0 
2302745 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@959 -- # uname 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2302745 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2302745' 00:04:17.658 killing process with pid 2302745 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@973 -- # kill 2302745 00:04:17.658 16:58:35 json_config -- common/autotest_common.sh@978 -- # wait 2302745 00:04:20.186 16:58:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.186 16:58:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:20.186 16:58:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.186 16:58:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.186 16:58:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:20.186 16:58:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:20.186 INFO: Success 00:04:20.186 00:04:20.186 real 0m16.989s 00:04:20.186 user 0m17.415s 00:04:20.186 sys 0m2.802s 00:04:20.187 16:58:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.187 16:58:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.187 ************************************ 00:04:20.187 END TEST json_config 00:04:20.187 ************************************ 00:04:20.187 16:58:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.187 16:58:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.187 16:58:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.187 16:58:37 -- common/autotest_common.sh@10 -- # set +x 00:04:20.187 ************************************ 00:04:20.187 START TEST json_config_extra_key 00:04:20.187 ************************************ 00:04:20.187 16:58:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.187 16:58:37 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.187 16:58:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.187 16:58:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.187 16:58:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:20.187 16:58:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.187 16:58:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.187 --rc genhtml_branch_coverage=1 00:04:20.187 --rc genhtml_function_coverage=1 00:04:20.187 --rc genhtml_legend=1 00:04:20.187 --rc geninfo_all_blocks=1 
00:04:20.187 --rc geninfo_unexecuted_blocks=1 00:04:20.187 00:04:20.187 ' 00:04:20.187 16:58:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.187 --rc genhtml_branch_coverage=1 00:04:20.187 --rc genhtml_function_coverage=1 00:04:20.187 --rc genhtml_legend=1 00:04:20.187 --rc geninfo_all_blocks=1 00:04:20.187 --rc geninfo_unexecuted_blocks=1 00:04:20.187 00:04:20.187 ' 00:04:20.187 16:58:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.187 --rc genhtml_branch_coverage=1 00:04:20.187 --rc genhtml_function_coverage=1 00:04:20.187 --rc genhtml_legend=1 00:04:20.187 --rc geninfo_all_blocks=1 00:04:20.187 --rc geninfo_unexecuted_blocks=1 00:04:20.187 00:04:20.187 ' 00:04:20.187 16:58:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.187 --rc genhtml_branch_coverage=1 00:04:20.187 --rc genhtml_function_coverage=1 00:04:20.187 --rc genhtml_legend=1 00:04:20.187 --rc geninfo_all_blocks=1 00:04:20.187 --rc geninfo_unexecuted_blocks=1 00:04:20.187 00:04:20.187 ' 00:04:20.187 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.187 16:58:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.187 16:58:38 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.187 16:58:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.187 16:58:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.187 16:58:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:20.187 16:58:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:20.187 16:58:38 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:20.187 16:58:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.188 16:58:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.188 16:58:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.188 16:58:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:20.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:20.188 16:58:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:20.188 16:58:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:20.188 16:58:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:20.188 INFO: launching applications... 00:04:20.188 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2304114 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.188 Waiting for target to run... 
00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2304114 /var/tmp/spdk_tgt.sock 00:04:20.188 16:58:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2304114 ']' 00:04:20.188 16:58:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.188 16:58:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.188 16:58:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.188 16:58:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.188 16:58:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.188 16:58:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:20.188 [2024-11-20 16:58:38.138289] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:04:20.188 [2024-11-20 16:58:38.138339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304114 ] 00:04:20.445 [2024-11-20 16:58:38.428126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.445 [2024-11-20 16:58:38.461791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.012 16:58:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.012 16:58:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:21.012 00:04:21.012 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:21.012 INFO: shutting down applications... 00:04:21.012 16:58:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2304114 ]] 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2304114 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2304114 00:04:21.012 16:58:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:21.581 16:58:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:21.581 16:58:39 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.581 16:58:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2304114 00:04:21.581 16:58:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:21.581 16:58:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:21.581 16:58:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:21.581 16:58:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:21.581 SPDK target shutdown done 00:04:21.581 16:58:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:21.581 Success 00:04:21.581 00:04:21.581 real 0m1.572s 00:04:21.581 user 0m1.346s 00:04:21.581 sys 0m0.399s 00:04:21.581 16:58:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.581 16:58:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.581 ************************************ 00:04:21.581 END TEST json_config_extra_key 00:04:21.581 ************************************ 00:04:21.581 16:58:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.581 16:58:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.581 16:58:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.581 16:58:39 -- common/autotest_common.sh@10 -- # set +x 00:04:21.581 ************************************ 00:04:21.581 START TEST alias_rpc 00:04:21.581 ************************************ 00:04:21.581 16:58:39 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.840 * Looking for test storage... 
00:04:21.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.840 16:58:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.840 --rc genhtml_branch_coverage=1 00:04:21.840 --rc genhtml_function_coverage=1 00:04:21.840 --rc genhtml_legend=1 00:04:21.840 --rc geninfo_all_blocks=1 00:04:21.840 --rc geninfo_unexecuted_blocks=1 00:04:21.840 00:04:21.840 ' 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.840 --rc genhtml_branch_coverage=1 00:04:21.840 --rc genhtml_function_coverage=1 00:04:21.840 --rc genhtml_legend=1 00:04:21.840 --rc geninfo_all_blocks=1 00:04:21.840 --rc geninfo_unexecuted_blocks=1 00:04:21.840 00:04:21.840 ' 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:21.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.840 --rc genhtml_branch_coverage=1 00:04:21.840 --rc genhtml_function_coverage=1 00:04:21.840 --rc genhtml_legend=1 00:04:21.840 --rc geninfo_all_blocks=1 00:04:21.840 --rc geninfo_unexecuted_blocks=1 00:04:21.840 00:04:21.840 ' 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.840 --rc genhtml_branch_coverage=1 00:04:21.840 --rc genhtml_function_coverage=1 00:04:21.840 --rc genhtml_legend=1 00:04:21.840 --rc geninfo_all_blocks=1 00:04:21.840 --rc geninfo_unexecuted_blocks=1 00:04:21.840 00:04:21.840 ' 00:04:21.840 16:58:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:21.840 16:58:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2304551 00:04:21.840 16:58:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.840 16:58:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2304551 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2304551 ']' 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.840 16:58:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.840 [2024-11-20 16:58:39.769759] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:04:21.840 [2024-11-20 16:58:39.769807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304551 ] 00:04:21.840 [2024-11-20 16:58:39.843878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.098 [2024-11-20 16:58:39.886705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.098 16:58:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.098 16:58:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.098 16:58:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:22.357 16:58:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2304551 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2304551 ']' 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2304551 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304551 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304551' 00:04:22.357 killing process with pid 2304551 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@973 -- # kill 2304551 00:04:22.357 16:58:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 2304551 00:04:22.926 00:04:22.926 real 0m1.132s 00:04:22.926 user 0m1.160s 00:04:22.926 sys 0m0.413s 00:04:22.926 16:58:40 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.926 16:58:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.926 ************************************ 00:04:22.926 END TEST alias_rpc 00:04:22.926 ************************************ 00:04:22.926 16:58:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:22.926 16:58:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.926 16:58:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.926 16:58:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.926 16:58:40 -- common/autotest_common.sh@10 -- # set +x 00:04:22.926 ************************************ 00:04:22.926 START TEST spdkcli_tcp 00:04:22.926 ************************************ 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:22.926 * Looking for test storage... 
00:04:22.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.926 16:58:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.926 --rc genhtml_branch_coverage=1 00:04:22.926 --rc genhtml_function_coverage=1 00:04:22.926 --rc genhtml_legend=1 00:04:22.926 --rc geninfo_all_blocks=1 00:04:22.926 --rc geninfo_unexecuted_blocks=1 00:04:22.926 00:04:22.926 ' 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.926 --rc genhtml_branch_coverage=1 00:04:22.926 --rc genhtml_function_coverage=1 00:04:22.926 --rc genhtml_legend=1 00:04:22.926 --rc geninfo_all_blocks=1 00:04:22.926 --rc geninfo_unexecuted_blocks=1 00:04:22.926 00:04:22.926 ' 00:04:22.926 16:58:40 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.926 --rc genhtml_branch_coverage=1 00:04:22.926 --rc genhtml_function_coverage=1 00:04:22.926 --rc genhtml_legend=1 00:04:22.926 --rc geninfo_all_blocks=1 00:04:22.926 --rc geninfo_unexecuted_blocks=1 00:04:22.926 00:04:22.926 ' 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.926 --rc genhtml_branch_coverage=1 00:04:22.926 --rc genhtml_function_coverage=1 00:04:22.926 --rc genhtml_legend=1 00:04:22.926 --rc geninfo_all_blocks=1 00:04:22.926 --rc geninfo_unexecuted_blocks=1 00:04:22.926 00:04:22.926 ' 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2304734 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2304734 00:04:22.926 16:58:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2304734 ']' 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.926 16:58:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.183 [2024-11-20 16:58:40.980758] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:04:23.183 [2024-11-20 16:58:40.980805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304734 ] 00:04:23.183 [2024-11-20 16:58:41.038415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.183 [2024-11-20 16:58:41.081394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.183 [2024-11-20 16:58:41.081397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.441 16:58:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.441 16:58:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:23.441 16:58:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2304853 00:04:23.441 16:58:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:23.441 16:58:41 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:23.700 [ 00:04:23.700 "bdev_malloc_delete", 00:04:23.700 "bdev_malloc_create", 00:04:23.700 "bdev_null_resize", 00:04:23.700 "bdev_null_delete", 00:04:23.700 "bdev_null_create", 00:04:23.700 "bdev_nvme_cuse_unregister", 00:04:23.700 "bdev_nvme_cuse_register", 00:04:23.700 "bdev_opal_new_user", 00:04:23.700 "bdev_opal_set_lock_state", 00:04:23.700 "bdev_opal_delete", 00:04:23.700 "bdev_opal_get_info", 00:04:23.700 "bdev_opal_create", 00:04:23.700 "bdev_nvme_opal_revert", 00:04:23.700 "bdev_nvme_opal_init", 00:04:23.700 "bdev_nvme_send_cmd", 00:04:23.700 "bdev_nvme_set_keys", 00:04:23.700 "bdev_nvme_get_path_iostat", 00:04:23.700 "bdev_nvme_get_mdns_discovery_info", 00:04:23.700 "bdev_nvme_stop_mdns_discovery", 00:04:23.700 "bdev_nvme_start_mdns_discovery", 00:04:23.700 "bdev_nvme_set_multipath_policy", 00:04:23.700 "bdev_nvme_set_preferred_path", 00:04:23.700 "bdev_nvme_get_io_paths", 00:04:23.700 "bdev_nvme_remove_error_injection", 00:04:23.700 "bdev_nvme_add_error_injection", 00:04:23.700 "bdev_nvme_get_discovery_info", 00:04:23.700 "bdev_nvme_stop_discovery", 00:04:23.700 "bdev_nvme_start_discovery", 00:04:23.700 "bdev_nvme_get_controller_health_info", 00:04:23.700 "bdev_nvme_disable_controller", 00:04:23.700 "bdev_nvme_enable_controller", 00:04:23.700 "bdev_nvme_reset_controller", 00:04:23.700 "bdev_nvme_get_transport_statistics", 00:04:23.700 "bdev_nvme_apply_firmware", 00:04:23.700 "bdev_nvme_detach_controller", 00:04:23.700 "bdev_nvme_get_controllers", 00:04:23.700 "bdev_nvme_attach_controller", 00:04:23.700 "bdev_nvme_set_hotplug", 00:04:23.700 "bdev_nvme_set_options", 00:04:23.700 "bdev_passthru_delete", 00:04:23.700 "bdev_passthru_create", 00:04:23.700 "bdev_lvol_set_parent_bdev", 00:04:23.700 "bdev_lvol_set_parent", 00:04:23.700 "bdev_lvol_check_shallow_copy", 00:04:23.700 "bdev_lvol_start_shallow_copy", 00:04:23.700 "bdev_lvol_grow_lvstore", 00:04:23.700 
"bdev_lvol_get_lvols", 00:04:23.700 "bdev_lvol_get_lvstores", 00:04:23.700 "bdev_lvol_delete", 00:04:23.700 "bdev_lvol_set_read_only", 00:04:23.700 "bdev_lvol_resize", 00:04:23.700 "bdev_lvol_decouple_parent", 00:04:23.700 "bdev_lvol_inflate", 00:04:23.700 "bdev_lvol_rename", 00:04:23.700 "bdev_lvol_clone_bdev", 00:04:23.700 "bdev_lvol_clone", 00:04:23.700 "bdev_lvol_snapshot", 00:04:23.700 "bdev_lvol_create", 00:04:23.700 "bdev_lvol_delete_lvstore", 00:04:23.700 "bdev_lvol_rename_lvstore", 00:04:23.700 "bdev_lvol_create_lvstore", 00:04:23.700 "bdev_raid_set_options", 00:04:23.700 "bdev_raid_remove_base_bdev", 00:04:23.700 "bdev_raid_add_base_bdev", 00:04:23.700 "bdev_raid_delete", 00:04:23.700 "bdev_raid_create", 00:04:23.700 "bdev_raid_get_bdevs", 00:04:23.700 "bdev_error_inject_error", 00:04:23.700 "bdev_error_delete", 00:04:23.700 "bdev_error_create", 00:04:23.700 "bdev_split_delete", 00:04:23.700 "bdev_split_create", 00:04:23.700 "bdev_delay_delete", 00:04:23.700 "bdev_delay_create", 00:04:23.700 "bdev_delay_update_latency", 00:04:23.700 "bdev_zone_block_delete", 00:04:23.700 "bdev_zone_block_create", 00:04:23.700 "blobfs_create", 00:04:23.700 "blobfs_detect", 00:04:23.700 "blobfs_set_cache_size", 00:04:23.700 "bdev_aio_delete", 00:04:23.700 "bdev_aio_rescan", 00:04:23.700 "bdev_aio_create", 00:04:23.700 "bdev_ftl_set_property", 00:04:23.700 "bdev_ftl_get_properties", 00:04:23.700 "bdev_ftl_get_stats", 00:04:23.700 "bdev_ftl_unmap", 00:04:23.700 "bdev_ftl_unload", 00:04:23.700 "bdev_ftl_delete", 00:04:23.700 "bdev_ftl_load", 00:04:23.700 "bdev_ftl_create", 00:04:23.700 "bdev_virtio_attach_controller", 00:04:23.700 "bdev_virtio_scsi_get_devices", 00:04:23.700 "bdev_virtio_detach_controller", 00:04:23.700 "bdev_virtio_blk_set_hotplug", 00:04:23.700 "bdev_iscsi_delete", 00:04:23.700 "bdev_iscsi_create", 00:04:23.700 "bdev_iscsi_set_options", 00:04:23.700 "accel_error_inject_error", 00:04:23.700 "ioat_scan_accel_module", 00:04:23.700 "dsa_scan_accel_module", 
00:04:23.700 "iaa_scan_accel_module", 00:04:23.700 "vfu_virtio_create_fs_endpoint", 00:04:23.700 "vfu_virtio_create_scsi_endpoint", 00:04:23.700 "vfu_virtio_scsi_remove_target", 00:04:23.700 "vfu_virtio_scsi_add_target", 00:04:23.700 "vfu_virtio_create_blk_endpoint", 00:04:23.700 "vfu_virtio_delete_endpoint", 00:04:23.700 "keyring_file_remove_key", 00:04:23.700 "keyring_file_add_key", 00:04:23.700 "keyring_linux_set_options", 00:04:23.700 "fsdev_aio_delete", 00:04:23.700 "fsdev_aio_create", 00:04:23.700 "iscsi_get_histogram", 00:04:23.700 "iscsi_enable_histogram", 00:04:23.700 "iscsi_set_options", 00:04:23.700 "iscsi_get_auth_groups", 00:04:23.700 "iscsi_auth_group_remove_secret", 00:04:23.700 "iscsi_auth_group_add_secret", 00:04:23.700 "iscsi_delete_auth_group", 00:04:23.700 "iscsi_create_auth_group", 00:04:23.700 "iscsi_set_discovery_auth", 00:04:23.700 "iscsi_get_options", 00:04:23.700 "iscsi_target_node_request_logout", 00:04:23.700 "iscsi_target_node_set_redirect", 00:04:23.700 "iscsi_target_node_set_auth", 00:04:23.700 "iscsi_target_node_add_lun", 00:04:23.700 "iscsi_get_stats", 00:04:23.700 "iscsi_get_connections", 00:04:23.700 "iscsi_portal_group_set_auth", 00:04:23.700 "iscsi_start_portal_group", 00:04:23.700 "iscsi_delete_portal_group", 00:04:23.700 "iscsi_create_portal_group", 00:04:23.700 "iscsi_get_portal_groups", 00:04:23.700 "iscsi_delete_target_node", 00:04:23.700 "iscsi_target_node_remove_pg_ig_maps", 00:04:23.700 "iscsi_target_node_add_pg_ig_maps", 00:04:23.700 "iscsi_create_target_node", 00:04:23.700 "iscsi_get_target_nodes", 00:04:23.700 "iscsi_delete_initiator_group", 00:04:23.700 "iscsi_initiator_group_remove_initiators", 00:04:23.700 "iscsi_initiator_group_add_initiators", 00:04:23.700 "iscsi_create_initiator_group", 00:04:23.700 "iscsi_get_initiator_groups", 00:04:23.701 "nvmf_set_crdt", 00:04:23.701 "nvmf_set_config", 00:04:23.701 "nvmf_set_max_subsystems", 00:04:23.701 "nvmf_stop_mdns_prr", 00:04:23.701 "nvmf_publish_mdns_prr", 
00:04:23.701 "nvmf_subsystem_get_listeners", 00:04:23.701 "nvmf_subsystem_get_qpairs", 00:04:23.701 "nvmf_subsystem_get_controllers", 00:04:23.701 "nvmf_get_stats", 00:04:23.701 "nvmf_get_transports", 00:04:23.701 "nvmf_create_transport", 00:04:23.701 "nvmf_get_targets", 00:04:23.701 "nvmf_delete_target", 00:04:23.701 "nvmf_create_target", 00:04:23.701 "nvmf_subsystem_allow_any_host", 00:04:23.701 "nvmf_subsystem_set_keys", 00:04:23.701 "nvmf_subsystem_remove_host", 00:04:23.701 "nvmf_subsystem_add_host", 00:04:23.701 "nvmf_ns_remove_host", 00:04:23.701 "nvmf_ns_add_host", 00:04:23.701 "nvmf_subsystem_remove_ns", 00:04:23.701 "nvmf_subsystem_set_ns_ana_group", 00:04:23.701 "nvmf_subsystem_add_ns", 00:04:23.701 "nvmf_subsystem_listener_set_ana_state", 00:04:23.701 "nvmf_discovery_get_referrals", 00:04:23.701 "nvmf_discovery_remove_referral", 00:04:23.701 "nvmf_discovery_add_referral", 00:04:23.701 "nvmf_subsystem_remove_listener", 00:04:23.701 "nvmf_subsystem_add_listener", 00:04:23.701 "nvmf_delete_subsystem", 00:04:23.701 "nvmf_create_subsystem", 00:04:23.701 "nvmf_get_subsystems", 00:04:23.701 "env_dpdk_get_mem_stats", 00:04:23.701 "nbd_get_disks", 00:04:23.701 "nbd_stop_disk", 00:04:23.701 "nbd_start_disk", 00:04:23.701 "ublk_recover_disk", 00:04:23.701 "ublk_get_disks", 00:04:23.701 "ublk_stop_disk", 00:04:23.701 "ublk_start_disk", 00:04:23.701 "ublk_destroy_target", 00:04:23.701 "ublk_create_target", 00:04:23.701 "virtio_blk_create_transport", 00:04:23.701 "virtio_blk_get_transports", 00:04:23.701 "vhost_controller_set_coalescing", 00:04:23.701 "vhost_get_controllers", 00:04:23.701 "vhost_delete_controller", 00:04:23.701 "vhost_create_blk_controller", 00:04:23.701 "vhost_scsi_controller_remove_target", 00:04:23.701 "vhost_scsi_controller_add_target", 00:04:23.701 "vhost_start_scsi_controller", 00:04:23.701 "vhost_create_scsi_controller", 00:04:23.701 "thread_set_cpumask", 00:04:23.701 "scheduler_set_options", 00:04:23.701 "framework_get_governor", 00:04:23.701 
"framework_get_scheduler", 00:04:23.701 "framework_set_scheduler", 00:04:23.701 "framework_get_reactors", 00:04:23.701 "thread_get_io_channels", 00:04:23.701 "thread_get_pollers", 00:04:23.701 "thread_get_stats", 00:04:23.701 "framework_monitor_context_switch", 00:04:23.701 "spdk_kill_instance", 00:04:23.701 "log_enable_timestamps", 00:04:23.701 "log_get_flags", 00:04:23.701 "log_clear_flag", 00:04:23.701 "log_set_flag", 00:04:23.701 "log_get_level", 00:04:23.701 "log_set_level", 00:04:23.701 "log_get_print_level", 00:04:23.701 "log_set_print_level", 00:04:23.701 "framework_enable_cpumask_locks", 00:04:23.701 "framework_disable_cpumask_locks", 00:04:23.701 "framework_wait_init", 00:04:23.701 "framework_start_init", 00:04:23.701 "scsi_get_devices", 00:04:23.701 "bdev_get_histogram", 00:04:23.701 "bdev_enable_histogram", 00:04:23.701 "bdev_set_qos_limit", 00:04:23.701 "bdev_set_qd_sampling_period", 00:04:23.701 "bdev_get_bdevs", 00:04:23.701 "bdev_reset_iostat", 00:04:23.701 "bdev_get_iostat", 00:04:23.701 "bdev_examine", 00:04:23.701 "bdev_wait_for_examine", 00:04:23.701 "bdev_set_options", 00:04:23.701 "accel_get_stats", 00:04:23.701 "accel_set_options", 00:04:23.701 "accel_set_driver", 00:04:23.701 "accel_crypto_key_destroy", 00:04:23.701 "accel_crypto_keys_get", 00:04:23.701 "accel_crypto_key_create", 00:04:23.701 "accel_assign_opc", 00:04:23.701 "accel_get_module_info", 00:04:23.701 "accel_get_opc_assignments", 00:04:23.701 "vmd_rescan", 00:04:23.701 "vmd_remove_device", 00:04:23.701 "vmd_enable", 00:04:23.701 "sock_get_default_impl", 00:04:23.701 "sock_set_default_impl", 00:04:23.701 "sock_impl_set_options", 00:04:23.701 "sock_impl_get_options", 00:04:23.701 "iobuf_get_stats", 00:04:23.701 "iobuf_set_options", 00:04:23.701 "keyring_get_keys", 00:04:23.701 "vfu_tgt_set_base_path", 00:04:23.701 "framework_get_pci_devices", 00:04:23.701 "framework_get_config", 00:04:23.701 "framework_get_subsystems", 00:04:23.701 "fsdev_set_opts", 00:04:23.701 "fsdev_get_opts", 
00:04:23.701 "trace_get_info", 00:04:23.701 "trace_get_tpoint_group_mask", 00:04:23.701 "trace_disable_tpoint_group", 00:04:23.701 "trace_enable_tpoint_group", 00:04:23.701 "trace_clear_tpoint_mask", 00:04:23.701 "trace_set_tpoint_mask", 00:04:23.701 "notify_get_notifications", 00:04:23.701 "notify_get_types", 00:04:23.701 "spdk_get_version", 00:04:23.701 "rpc_get_methods" 00:04:23.701 ] 00:04:23.701 16:58:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.701 16:58:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:23.701 16:58:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2304734 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2304734 ']' 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2304734 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304734 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304734' 00:04:23.701 killing process with pid 2304734 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2304734 00:04:23.701 16:58:41 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2304734 00:04:23.960 00:04:23.960 real 0m1.139s 00:04:23.961 user 0m1.948s 00:04:23.961 sys 0m0.428s 00:04:23.961 16:58:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.961 16:58:41 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.961 ************************************ 00:04:23.961 END TEST spdkcli_tcp 00:04:23.961 ************************************ 00:04:23.961 16:58:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:23.961 16:58:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.961 16:58:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.961 16:58:41 -- common/autotest_common.sh@10 -- # set +x 00:04:23.961 ************************************ 00:04:23.961 START TEST dpdk_mem_utility 00:04:23.961 ************************************ 00:04:23.961 16:58:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.219 * Looking for test storage... 00:04:24.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.220 16:58:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:24.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.220 --rc genhtml_branch_coverage=1 00:04:24.220 --rc genhtml_function_coverage=1 00:04:24.220 --rc genhtml_legend=1 00:04:24.220 --rc geninfo_all_blocks=1 00:04:24.220 --rc geninfo_unexecuted_blocks=1 00:04:24.220 00:04:24.220 ' 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.220 --rc genhtml_branch_coverage=1 00:04:24.220 --rc genhtml_function_coverage=1 00:04:24.220 --rc genhtml_legend=1 00:04:24.220 --rc geninfo_all_blocks=1 00:04:24.220 --rc geninfo_unexecuted_blocks=1 00:04:24.220 00:04:24.220 ' 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.220 --rc genhtml_branch_coverage=1 00:04:24.220 --rc genhtml_function_coverage=1 00:04:24.220 --rc genhtml_legend=1 00:04:24.220 --rc geninfo_all_blocks=1 00:04:24.220 --rc geninfo_unexecuted_blocks=1 00:04:24.220 00:04:24.220 ' 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.220 --rc genhtml_branch_coverage=1 00:04:24.220 --rc genhtml_function_coverage=1 00:04:24.220 --rc genhtml_legend=1 00:04:24.220 --rc geninfo_all_blocks=1 00:04:24.220 --rc geninfo_unexecuted_blocks=1 00:04:24.220 00:04:24.220 ' 00:04:24.220 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.220 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2304940 00:04:24.220 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.220 16:58:42 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2304940 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2304940 ']' 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.220 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.220 [2024-11-20 16:58:42.173704] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:04:24.220 [2024-11-20 16:58:42.173757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304940 ] 00:04:24.220 [2024-11-20 16:58:42.250684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.478 [2024-11-20 16:58:42.292032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.478 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.478 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:24.478 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:24.478 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:24.478 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:24.478 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.737 { 00:04:24.737 "filename": "/tmp/spdk_mem_dump.txt" 00:04:24.737 } 00:04:24.738 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.738 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:24.738 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:24.738 1 heaps totaling size 818.000000 MiB 00:04:24.738 size: 818.000000 MiB heap id: 0 00:04:24.738 end heaps---------- 00:04:24.738 9 mempools totaling size 603.782043 MiB 00:04:24.738 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:24.738 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:24.738 size: 100.555481 MiB name: bdev_io_2304940 00:04:24.738 size: 50.003479 MiB name: msgpool_2304940 00:04:24.738 size: 36.509338 MiB name: fsdev_io_2304940 00:04:24.738 size: 21.763794 MiB name: PDU_Pool 00:04:24.738 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:24.738 size: 4.133484 MiB name: evtpool_2304940 00:04:24.738 size: 0.026123 MiB name: Session_Pool 00:04:24.738 end mempools------- 00:04:24.738 6 memzones totaling size 4.142822 MiB 00:04:24.738 size: 1.000366 MiB name: RG_ring_0_2304940 00:04:24.738 size: 1.000366 MiB name: RG_ring_1_2304940 00:04:24.738 size: 1.000366 MiB name: RG_ring_4_2304940 00:04:24.738 size: 1.000366 MiB name: RG_ring_5_2304940 00:04:24.738 size: 0.125366 MiB name: RG_ring_2_2304940 00:04:24.738 size: 0.015991 MiB name: RG_ring_3_2304940 00:04:24.738 end memzones------- 00:04:24.738 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:24.738 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:24.738 list of free elements. 
size: 10.852478 MiB 00:04:24.738 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:24.738 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:24.738 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:24.738 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:24.738 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:24.738 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:24.738 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:24.738 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:24.738 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:24.738 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:24.738 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:24.738 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:24.738 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:24.738 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:24.738 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:24.738 list of standard malloc elements. 
size: 199.218628 MiB 00:04:24.738 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:24.738 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:24.738 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:24.738 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:24.738 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:24.738 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:24.738 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:24.738 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:24.738 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:24.738 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:24.738 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:24.738 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:24.738 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:24.738 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:24.738 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:24.738 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:24.738 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:24.738 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:24.738 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:24.738 list of memzone associated elements. 
size: 607.928894 MiB 00:04:24.738 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:24.738 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:24.738 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:24.738 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:24.738 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:24.738 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2304940_0 00:04:24.738 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:24.738 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2304940_0 00:04:24.738 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:24.738 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2304940_0 00:04:24.738 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:24.738 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:24.738 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:24.738 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:24.738 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:24.738 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2304940_0 00:04:24.738 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:24.738 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2304940 00:04:24.738 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:24.738 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2304940 00:04:24.738 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:24.738 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:24.738 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:24.738 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:24.738 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:24.738 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:24.738 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:24.738 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:24.738 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:24.738 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2304940 00:04:24.738 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:24.738 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2304940 00:04:24.738 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:24.738 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2304940 00:04:24.738 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:24.739 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2304940 00:04:24.739 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:24.739 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2304940 00:04:24.739 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:24.739 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2304940 00:04:24.739 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:24.739 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:24.739 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:24.739 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:24.739 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:24.739 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:24.739 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:24.739 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2304940 00:04:24.739 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:24.739 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2304940 00:04:24.739 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:24.739 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:24.739 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:24.739 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:24.739 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:24.739 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2304940 00:04:24.739 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:24.739 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:24.739 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:24.739 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2304940 00:04:24.739 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:24.739 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2304940 00:04:24.739 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:24.739 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2304940 00:04:24.739 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:24.739 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:24.739 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:24.739 16:58:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2304940 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2304940 ']' 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2304940 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304940 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.739 16:58:42 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304940' 00:04:24.739 killing process with pid 2304940 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2304940 00:04:24.739 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2304940 00:04:24.998 00:04:24.998 real 0m1.017s 00:04:24.998 user 0m0.950s 00:04:24.998 sys 0m0.410s 00:04:24.998 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.998 16:58:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:24.998 ************************************ 00:04:24.998 END TEST dpdk_mem_utility 00:04:24.998 ************************************ 00:04:24.998 16:58:42 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:24.998 16:58:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.998 16:58:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.998 16:58:43 -- common/autotest_common.sh@10 -- # set +x 00:04:24.998 ************************************ 00:04:24.998 START TEST event 00:04:24.998 ************************************ 00:04:24.998 16:58:43 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:25.256 * Looking for test storage... 
00:04:25.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:25.256 16:58:43 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.256 16:58:43 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.256 16:58:43 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.256 16:58:43 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.256 16:58:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.256 16:58:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.256 16:58:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.256 16:58:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.256 16:58:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.256 16:58:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.256 16:58:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.256 16:58:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.256 16:58:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.257 16:58:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.257 16:58:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.257 16:58:43 event -- scripts/common.sh@344 -- # case "$op" in 00:04:25.257 16:58:43 event -- scripts/common.sh@345 -- # : 1 00:04:25.257 16:58:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.257 16:58:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.257 16:58:43 event -- scripts/common.sh@365 -- # decimal 1 00:04:25.257 16:58:43 event -- scripts/common.sh@353 -- # local d=1 00:04:25.257 16:58:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.257 16:58:43 event -- scripts/common.sh@355 -- # echo 1 00:04:25.257 16:58:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.257 16:58:43 event -- scripts/common.sh@366 -- # decimal 2 00:04:25.257 16:58:43 event -- scripts/common.sh@353 -- # local d=2 00:04:25.257 16:58:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.257 16:58:43 event -- scripts/common.sh@355 -- # echo 2 00:04:25.257 16:58:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.257 16:58:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.257 16:58:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.257 16:58:43 event -- scripts/common.sh@368 -- # return 0 00:04:25.257 16:58:43 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.257 16:58:43 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.257 --rc genhtml_branch_coverage=1 00:04:25.257 --rc genhtml_function_coverage=1 00:04:25.257 --rc genhtml_legend=1 00:04:25.257 --rc geninfo_all_blocks=1 00:04:25.257 --rc geninfo_unexecuted_blocks=1 00:04:25.257 00:04:25.257 ' 00:04:25.257 16:58:43 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.257 --rc genhtml_branch_coverage=1 00:04:25.257 --rc genhtml_function_coverage=1 00:04:25.257 --rc genhtml_legend=1 00:04:25.257 --rc geninfo_all_blocks=1 00:04:25.257 --rc geninfo_unexecuted_blocks=1 00:04:25.257 00:04:25.257 ' 00:04:25.257 16:58:43 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.257 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:25.257 --rc genhtml_branch_coverage=1 00:04:25.257 --rc genhtml_function_coverage=1 00:04:25.257 --rc genhtml_legend=1 00:04:25.257 --rc geninfo_all_blocks=1 00:04:25.257 --rc geninfo_unexecuted_blocks=1 00:04:25.257 00:04:25.257 ' 00:04:25.257 16:58:43 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.257 --rc genhtml_branch_coverage=1 00:04:25.257 --rc genhtml_function_coverage=1 00:04:25.257 --rc genhtml_legend=1 00:04:25.257 --rc geninfo_all_blocks=1 00:04:25.257 --rc geninfo_unexecuted_blocks=1 00:04:25.257 00:04:25.257 ' 00:04:25.257 16:58:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:25.257 16:58:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:25.257 16:58:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.257 16:58:43 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:25.257 16:58:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.257 16:58:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.257 ************************************ 00:04:25.257 START TEST event_perf 00:04:25.257 ************************************ 00:04:25.257 16:58:43 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:25.257 Running I/O for 1 seconds...[2024-11-20 16:58:43.273143] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:04:25.257 [2024-11-20 16:58:43.273217] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305229 ] 00:04:25.515 [2024-11-20 16:58:43.341873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:25.515 [2024-11-20 16:58:43.385601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.515 [2024-11-20 16:58:43.385712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.515 [2024-11-20 16:58:43.385817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.515 Running I/O for 1 seconds...[2024-11-20 16:58:43.385818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:26.447 00:04:26.447 lcore 0: 206235 00:04:26.447 lcore 1: 206235 00:04:26.447 lcore 2: 206235 00:04:26.447 lcore 3: 206235 00:04:26.447 done. 
00:04:26.447 00:04:26.447 real 0m1.175s 00:04:26.447 user 0m4.096s 00:04:26.447 sys 0m0.075s 00:04:26.447 16:58:44 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.447 16:58:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:26.447 ************************************ 00:04:26.447 END TEST event_perf 00:04:26.447 ************************************ 00:04:26.447 16:58:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.447 16:58:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:26.447 16:58:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.447 16:58:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.706 ************************************ 00:04:26.706 START TEST event_reactor 00:04:26.706 ************************************ 00:04:26.706 16:58:44 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:26.706 [2024-11-20 16:58:44.513456] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:04:26.706 [2024-11-20 16:58:44.513527] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305484 ] 00:04:26.706 [2024-11-20 16:58:44.593266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.706 [2024-11-20 16:58:44.632594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.649 test_start 00:04:27.649 oneshot 00:04:27.649 tick 100 00:04:27.649 tick 100 00:04:27.649 tick 250 00:04:27.649 tick 100 00:04:27.649 tick 100 00:04:27.649 tick 250 00:04:27.649 tick 100 00:04:27.649 tick 500 00:04:27.649 tick 100 00:04:27.649 tick 100 00:04:27.649 tick 250 00:04:27.649 tick 100 00:04:27.649 tick 100 00:04:27.649 test_end 00:04:27.649 00:04:27.649 real 0m1.178s 00:04:27.649 user 0m1.098s 00:04:27.649 sys 0m0.076s 00:04:27.649 16:58:45 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.649 16:58:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:27.649 ************************************ 00:04:27.649 END TEST event_reactor 00:04:27.649 ************************************ 00:04:27.908 16:58:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:27.908 16:58:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:27.908 16:58:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.908 16:58:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.908 ************************************ 00:04:27.908 START TEST event_reactor_perf 00:04:27.908 ************************************ 00:04:27.908 16:58:45 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:27.908 [2024-11-20 16:58:45.763355] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:04:27.908 [2024-11-20 16:58:45.763422] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305731 ] 00:04:27.908 [2024-11-20 16:58:45.844767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.908 [2024-11-20 16:58:45.885257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.281 test_start 00:04:29.281 test_end 00:04:29.281 Performance: 522033 events per second 00:04:29.281 00:04:29.281 real 0m1.181s 00:04:29.281 user 0m1.093s 00:04:29.281 sys 0m0.083s 00:04:29.281 16:58:46 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.281 16:58:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.281 ************************************ 00:04:29.281 END TEST event_reactor_perf 00:04:29.281 ************************************ 00:04:29.281 16:58:46 event -- event/event.sh@49 -- # uname -s 00:04:29.281 16:58:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:29.281 16:58:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.281 16:58:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.281 16:58:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.281 16:58:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.281 ************************************ 00:04:29.281 START TEST event_scheduler 00:04:29.281 ************************************ 00:04:29.281 16:58:46 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:29.281 * Looking for test storage... 00:04:29.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:29.281 16:58:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.281 16:58:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.281 16:58:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.281 16:58:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:29.281 16:58:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.282 16:58:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.282 --rc genhtml_branch_coverage=1 00:04:29.282 --rc genhtml_function_coverage=1 00:04:29.282 --rc genhtml_legend=1 00:04:29.282 --rc geninfo_all_blocks=1 00:04:29.282 --rc geninfo_unexecuted_blocks=1 00:04:29.282 00:04:29.282 ' 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.282 --rc genhtml_branch_coverage=1 00:04:29.282 --rc genhtml_function_coverage=1 00:04:29.282 --rc 
genhtml_legend=1 00:04:29.282 --rc geninfo_all_blocks=1 00:04:29.282 --rc geninfo_unexecuted_blocks=1 00:04:29.282 00:04:29.282 ' 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.282 --rc genhtml_branch_coverage=1 00:04:29.282 --rc genhtml_function_coverage=1 00:04:29.282 --rc genhtml_legend=1 00:04:29.282 --rc geninfo_all_blocks=1 00:04:29.282 --rc geninfo_unexecuted_blocks=1 00:04:29.282 00:04:29.282 ' 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.282 --rc genhtml_branch_coverage=1 00:04:29.282 --rc genhtml_function_coverage=1 00:04:29.282 --rc genhtml_legend=1 00:04:29.282 --rc geninfo_all_blocks=1 00:04:29.282 --rc geninfo_unexecuted_blocks=1 00:04:29.282 00:04:29.282 ' 00:04:29.282 16:58:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:29.282 16:58:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2306017 00:04:29.282 16:58:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.282 16:58:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:29.282 16:58:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2306017 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2306017 ']' 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.282 16:58:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.282 [2024-11-20 16:58:47.219508] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:04:29.282 [2024-11-20 16:58:47.219551] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306017 ] 00:04:29.282 [2024-11-20 16:58:47.292238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.541 [2024-11-20 16:58:47.337949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.541 [2024-11-20 16:58:47.338059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.541 [2024-11-20 16:58:47.338167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.541 [2024-11-20 16:58:47.338168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:29.541 16:58:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 [2024-11-20 16:58:47.374725] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:29.541 [2024-11-20 16:58:47.374741] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:29.541 [2024-11-20 16:58:47.374751] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:29.541 [2024-11-20 16:58:47.374756] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:29.541 [2024-11-20 16:58:47.374761] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 [2024-11-20 16:58:47.449057] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 ************************************ 00:04:29.541 START TEST scheduler_create_thread 00:04:29.541 ************************************ 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 2 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 3 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 4 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 5 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 6 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 7 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 8 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 9 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.541 10 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.541 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.800 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.800 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:29.800 16:58:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:29.800 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.800 16:58:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.734 16:58:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.734 16:58:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:30.734 16:58:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.734 16:58:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.120 16:58:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.120 16:58:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:32.120 16:58:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:32.120 16:58:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.120 16:58:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.053 16:58:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.053 00:04:33.053 real 0m3.378s 00:04:33.053 user 0m0.023s 00:04:33.053 sys 0m0.007s 00:04:33.053 16:58:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.053 16:58:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.053 ************************************ 00:04:33.053 END TEST scheduler_create_thread 00:04:33.053 ************************************ 00:04:33.053 16:58:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:33.053 16:58:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2306017 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2306017 ']' 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2306017 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306017 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306017' 00:04:33.053 killing process with pid 2306017 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2306017 00:04:33.053 16:58:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2306017 00:04:33.311 [2024-11-20 16:58:51.245174] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:33.570 00:04:33.570 real 0m4.452s 00:04:33.570 user 0m7.775s 00:04:33.570 sys 0m0.371s 00:04:33.570 16:58:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.570 16:58:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.570 ************************************ 00:04:33.570 END TEST event_scheduler 00:04:33.570 ************************************ 00:04:33.570 16:58:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:33.570 16:58:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:33.570 16:58:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.570 16:58:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.570 16:58:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.570 ************************************ 00:04:33.570 START TEST app_repeat 00:04:33.570 ************************************ 00:04:33.570 16:58:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2306762 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2306762' 00:04:33.570 Process app_repeat pid: 2306762 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:33.570 spdk_app_start Round 0 00:04:33.570 16:58:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2306762 /var/tmp/spdk-nbd.sock 00:04:33.570 16:58:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2306762 ']' 00:04:33.570 16:58:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.570 16:58:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.570 16:58:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.570 16:58:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.570 16:58:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.570 [2024-11-20 16:58:51.563302] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:04:33.570 [2024-11-20 16:58:51.563350] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306762 ] 00:04:33.829 [2024-11-20 16:58:51.637030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.829 [2024-11-20 16:58:51.680811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.829 [2024-11-20 16:58:51.680813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.829 16:58:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.829 16:58:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:33.829 16:58:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.088 Malloc0 00:04:34.088 16:58:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.347 Malloc1 00:04:34.347 16:58:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.347 
16:58:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.347 16:58:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.347 /dev/nbd0 00:04:34.605 16:58:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.605 16:58:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:34.605 1+0 records in 00:04:34.605 1+0 records out 00:04:34.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235245 s, 17.4 MB/s 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.605 16:58:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.605 16:58:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.605 16:58:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.605 16:58:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.605 /dev/nbd1 00:04:34.606 16:58:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.606 16:58:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.606 16:58:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:34.606 16:58:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.606 16:58:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.606 16:58:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.606 16:58:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:34.864 16:58:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.864 16:58:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.864 16:58:52 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.864 16:58:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.864 1+0 records in 00:04:34.864 1+0 records out 00:04:34.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148859 s, 27.5 MB/s 00:04:34.864 16:58:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.864 16:58:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.864 16:58:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.864 16:58:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.864 16:58:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.864 { 00:04:34.864 "nbd_device": "/dev/nbd0", 00:04:34.864 "bdev_name": "Malloc0" 00:04:34.864 }, 00:04:34.864 { 00:04:34.864 "nbd_device": "/dev/nbd1", 00:04:34.864 "bdev_name": "Malloc1" 00:04:34.864 } 00:04:34.864 ]' 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.864 { 00:04:34.864 "nbd_device": "/dev/nbd0", 00:04:34.864 "bdev_name": "Malloc0" 00:04:34.864 
}, 00:04:34.864 { 00:04:34.864 "nbd_device": "/dev/nbd1", 00:04:34.864 "bdev_name": "Malloc1" 00:04:34.864 } 00:04:34.864 ]' 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.864 /dev/nbd1' 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.864 /dev/nbd1' 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.864 16:58:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.865 16:58:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.124 256+0 records in 00:04:35.124 256+0 records out 00:04:35.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106519 s, 98.4 MB/s 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.124 256+0 records in 00:04:35.124 256+0 records out 00:04:35.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134256 s, 78.1 MB/s 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.124 256+0 records in 00:04:35.124 256+0 records out 00:04:35.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146194 s, 71.7 MB/s 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.124 16:58:52 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.124 16:58:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.402 16:58:53 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.402 16:58:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:35.715 16:58:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:35.715 16:58:53 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.987 16:58:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.987 [2024-11-20 16:58:53.997525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.285 [2024-11-20 16:58:54.036032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.285 [2024-11-20 16:58:54.036033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.285 [2024-11-20 16:58:54.076733] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.285 [2024-11-20 16:58:54.076777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.821 16:58:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.821 16:58:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:38.821 spdk_app_start Round 1 00:04:38.821 16:58:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2306762 /var/tmp/spdk-nbd.sock 00:04:38.821 16:58:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2306762 ']' 00:04:38.821 16:58:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.821 16:58:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.821 16:58:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:38.821 16:58:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.821 16:58:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.079 16:58:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.079 16:58:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:39.079 16:58:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.338 Malloc0 00:04:39.338 16:58:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.597 Malloc1 00:04:39.597 16:58:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.597 16:58:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.597 16:58:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.597 16:58:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.598 16:58:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.857 /dev/nbd0 00:04:39.857 16:58:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.857 16:58:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.857 1+0 records in 00:04:39.857 1+0 records out 00:04:39.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238562 s, 17.2 MB/s 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:39.857 16:58:57 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:39.857 16:58:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:39.857 16:58:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.857 16:58:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.857 16:58:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.115 /dev/nbd1 00:04:40.115 16:58:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.115 16:58:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.115 1+0 records in 00:04:40.115 1+0 records out 00:04:40.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247121 s, 16.6 MB/s 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.115 16:58:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.115 16:58:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.115 16:58:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.115 16:58:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.115 16:58:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.115 16:58:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.374 { 00:04:40.374 "nbd_device": "/dev/nbd0", 00:04:40.374 "bdev_name": "Malloc0" 00:04:40.374 }, 00:04:40.374 { 00:04:40.374 "nbd_device": "/dev/nbd1", 00:04:40.374 "bdev_name": "Malloc1" 00:04:40.374 } 00:04:40.374 ]' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.374 { 00:04:40.374 "nbd_device": "/dev/nbd0", 00:04:40.374 "bdev_name": "Malloc0" 00:04:40.374 }, 00:04:40.374 { 00:04:40.374 "nbd_device": "/dev/nbd1", 00:04:40.374 "bdev_name": "Malloc1" 00:04:40.374 } 00:04:40.374 ]' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.374 /dev/nbd1' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.374 /dev/nbd1' 00:04:40.374 
16:58:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.374 256+0 records in 00:04:40.374 256+0 records out 00:04:40.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542897 s, 193 MB/s 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.374 256+0 records in 00:04:40.374 256+0 records out 00:04:40.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140035 s, 74.9 MB/s 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.374 256+0 records in 00:04:40.374 256+0 records out 00:04:40.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149351 s, 70.2 MB/s 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.374 16:58:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.633 16:58:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.891 16:58:58 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.891 16:58:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.149 16:58:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.149 16:58:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.149 16:58:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.407 [2024-11-20 16:58:59.299155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.407 [2024-11-20 16:58:59.335813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.407 [2024-11-20 16:58:59.335814] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.407 [2024-11-20 16:58:59.376801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.407 [2024-11-20 16:58:59.376841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.690 16:59:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.690 16:59:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:44.690 spdk_app_start Round 2 00:04:44.690 16:59:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2306762 /var/tmp/spdk-nbd.sock 00:04:44.690 16:59:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2306762 ']' 00:04:44.690 16:59:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.690 16:59:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.690 16:59:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:44.690 16:59:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.690 16:59:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.690 16:59:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.690 16:59:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:44.690 16:59:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.690 Malloc0 00:04:44.690 16:59:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.947 Malloc1 00:04:44.947 16:59:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.947 16:59:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.948 16:59:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.948 /dev/nbd0 00:04:45.207 16:59:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.207 16:59:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.207 1+0 records in 00:04:45.207 1+0 records out 00:04:45.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001691 s, 24.2 MB/s 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.207 16:59:03 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.207 16:59:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.207 16:59:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.207 16:59:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.207 16:59:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.207 /dev/nbd1 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.465 1+0 records in 00:04:45.465 1+0 records out 00:04:45.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188389 s, 21.7 MB/s 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.465 16:59:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.465 { 00:04:45.465 "nbd_device": "/dev/nbd0", 00:04:45.465 "bdev_name": "Malloc0" 00:04:45.465 }, 00:04:45.465 { 00:04:45.465 "nbd_device": "/dev/nbd1", 00:04:45.465 "bdev_name": "Malloc1" 00:04:45.465 } 00:04:45.465 ]' 00:04:45.465 16:59:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.465 { 00:04:45.465 "nbd_device": "/dev/nbd0", 00:04:45.465 "bdev_name": "Malloc0" 00:04:45.465 }, 00:04:45.465 { 00:04:45.466 "nbd_device": "/dev/nbd1", 00:04:45.466 "bdev_name": "Malloc1" 00:04:45.466 } 00:04:45.466 ]' 00:04:45.466 16:59:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.724 16:59:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.724 /dev/nbd1' 00:04:45.724 16:59:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.724 16:59:03 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.724 /dev/nbd1' 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.725 256+0 records in 00:04:45.725 256+0 records out 00:04:45.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00995649 s, 105 MB/s 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.725 256+0 records in 00:04:45.725 256+0 records out 00:04:45.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137523 s, 76.2 MB/s 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.725 256+0 records in 00:04:45.725 256+0 records out 00:04:45.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149384 s, 70.2 MB/s 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.725 16:59:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.984 16:59:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.984 16:59:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.984 16:59:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.984 16:59:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.241 16:59:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.241 16:59:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.499 16:59:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.758 [2024-11-20 16:59:04.607607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.758 [2024-11-20 16:59:04.644260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.758 [2024-11-20 16:59:04.644260] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.758 [2024-11-20 16:59:04.684870] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.758 [2024-11-20 16:59:04.684910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.042 16:59:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2306762 /var/tmp/spdk-nbd.sock 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2306762 ']' 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.042 16:59:07 event.app_repeat -- event/event.sh@39 -- # killprocess 2306762 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2306762 ']' 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2306762 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306762 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306762' 00:04:50.042 killing process with pid 2306762 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2306762 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2306762 00:04:50.042 spdk_app_start is called in Round 0. 00:04:50.042 Shutdown signal received, stop current app iteration 00:04:50.042 Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 reinitialization... 00:04:50.042 spdk_app_start is called in Round 1. 00:04:50.042 Shutdown signal received, stop current app iteration 00:04:50.042 Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 reinitialization... 00:04:50.042 spdk_app_start is called in Round 2. 
00:04:50.042 Shutdown signal received, stop current app iteration 00:04:50.042 Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 reinitialization... 00:04:50.042 spdk_app_start is called in Round 3. 00:04:50.042 Shutdown signal received, stop current app iteration 00:04:50.042 16:59:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:50.042 16:59:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:50.042 00:04:50.042 real 0m16.320s 00:04:50.042 user 0m35.849s 00:04:50.042 sys 0m2.483s 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.042 16:59:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.042 ************************************ 00:04:50.042 END TEST app_repeat 00:04:50.042 ************************************ 00:04:50.042 16:59:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:50.042 16:59:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.042 16:59:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.042 16:59:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.042 16:59:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.042 ************************************ 00:04:50.042 START TEST cpu_locks 00:04:50.042 ************************************ 00:04:50.042 16:59:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:50.042 * Looking for test storage... 
00:04:50.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:50.042 16:59:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.042 16:59:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.042 16:59:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.042 16:59:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.042 16:59:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.043 16:59:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.302 16:59:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:50.302 16:59:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.302 16:59:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.302 --rc genhtml_branch_coverage=1 00:04:50.302 --rc genhtml_function_coverage=1 00:04:50.302 --rc genhtml_legend=1 00:04:50.302 --rc geninfo_all_blocks=1 00:04:50.302 --rc geninfo_unexecuted_blocks=1 00:04:50.302 00:04:50.302 ' 00:04:50.302 16:59:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.302 --rc genhtml_branch_coverage=1 00:04:50.302 --rc genhtml_function_coverage=1 00:04:50.302 --rc genhtml_legend=1 00:04:50.302 --rc geninfo_all_blocks=1 00:04:50.302 --rc geninfo_unexecuted_blocks=1 
00:04:50.302 00:04:50.302 ' 00:04:50.302 16:59:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.302 --rc genhtml_branch_coverage=1 00:04:50.302 --rc genhtml_function_coverage=1 00:04:50.302 --rc genhtml_legend=1 00:04:50.302 --rc geninfo_all_blocks=1 00:04:50.302 --rc geninfo_unexecuted_blocks=1 00:04:50.302 00:04:50.302 ' 00:04:50.302 16:59:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.302 --rc genhtml_branch_coverage=1 00:04:50.302 --rc genhtml_function_coverage=1 00:04:50.302 --rc genhtml_legend=1 00:04:50.302 --rc geninfo_all_blocks=1 00:04:50.302 --rc geninfo_unexecuted_blocks=1 00:04:50.302 00:04:50.302 ' 00:04:50.302 16:59:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:50.302 16:59:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:50.302 16:59:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:50.302 16:59:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:50.302 16:59:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.302 16:59:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.302 16:59:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.302 ************************************ 00:04:50.302 START TEST default_locks 00:04:50.302 ************************************ 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2309768 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2309768 00:04:50.302 16:59:08 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2309768 ']' 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.302 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.302 [2024-11-20 16:59:08.179138] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:04:50.302 [2024-11-20 16:59:08.179178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309768 ] 00:04:50.302 [2024-11-20 16:59:08.255450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.302 [2024-11-20 16:59:08.297259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.561 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.561 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:50.561 16:59:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2309768 00:04:50.561 16:59:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2309768 00:04:50.561 16:59:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.820 lslocks: write error 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2309768 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2309768 ']' 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2309768 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309768 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2309768' 00:04:50.820 killing process with pid 2309768 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2309768 00:04:50.820 16:59:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2309768 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2309768 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2309768 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2309768 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2309768 ']' 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.387 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2309768) - No such process 00:04:51.388 ERROR: process (pid: 2309768) is no longer running 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:51.388 00:04:51.388 real 0m1.004s 00:04:51.388 user 0m0.944s 00:04:51.388 sys 0m0.454s 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.388 16:59:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.388 ************************************ 00:04:51.388 END TEST default_locks 00:04:51.388 ************************************ 00:04:51.388 16:59:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:51.388 16:59:09 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.388 16:59:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.388 16:59:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.388 ************************************ 00:04:51.388 START TEST default_locks_via_rpc 00:04:51.388 ************************************ 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2310023 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2310023 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2310023 ']' 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.388 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.388 [2024-11-20 16:59:09.249729] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:04:51.388 [2024-11-20 16:59:09.249766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310023 ] 00:04:51.388 [2024-11-20 16:59:09.323124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.388 [2024-11-20 16:59:09.364883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.647 16:59:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2310023 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2310023 00:04:51.647 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2310023 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2310023 ']' 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2310023 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310023 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310023' 00:04:51.905 killing process with pid 2310023 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2310023 00:04:51.905 16:59:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2310023 00:04:52.163 00:04:52.163 real 0m0.983s 00:04:52.163 user 0m0.925s 00:04:52.163 sys 0m0.445s 00:04:52.163 16:59:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.163 16:59:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.163 ************************************ 00:04:52.163 END TEST default_locks_via_rpc 00:04:52.163 ************************************ 00:04:52.421 16:59:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:52.421 16:59:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.421 16:59:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.421 16:59:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.421 ************************************ 00:04:52.421 START TEST non_locking_app_on_locked_coremask 00:04:52.421 ************************************ 00:04:52.421 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:52.421 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2310273 00:04:52.421 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2310273 /var/tmp/spdk.sock 00:04:52.421 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.421 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2310273 ']' 00:04:52.421 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.421 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.422 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:52.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.422 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.422 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.422 [2024-11-20 16:59:10.299948] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:04:52.422 [2024-11-20 16:59:10.299990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310273 ] 00:04:52.422 [2024-11-20 16:59:10.374234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.422 [2024-11-20 16:59:10.416291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2310283 00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2310283 /var/tmp/spdk2.sock 00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2310283 ']' 00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock
00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:52.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:52.680 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:52.681 16:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:52.681 [2024-11-20 16:59:10.681388] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:04:52.681 [2024-11-20 16:59:10.681434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310283 ]
00:04:52.939 [2024-11-20 16:59:10.763849] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:52.939 [2024-11-20 16:59:10.763870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.939 [2024-11-20 16:59:10.844216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.506 16:59:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:53.506 16:59:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:53.506 16:59:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2310273
00:04:53.506 16:59:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2310273
00:04:53.506 16:59:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:54.073 lslocks: write error
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2310273
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2310273 ']'
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2310273
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310273
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310273'
00:04:54.073 killing process with pid 2310273 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2310273
00:04:54.073 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2310273
00:04:54.640 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2310283
00:04:54.640 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2310283 ']'
00:04:54.640 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2310283
00:04:54.640 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:54.640 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:54.640 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310283
00:04:54.899 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:54.899 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:54.899 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310283'
00:04:54.899 killing process with pid 2310283 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2310283
00:04:54.899 16:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2310283
00:04:55.158
00:04:55.158 real 0m2.764s
00:04:55.158 user 0m2.899s
00:04:55.158 sys 0m0.901s
00:04:55.158 16:59:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.158 16:59:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:55.158 ************************************
00:04:55.158 END TEST non_locking_app_on_locked_coremask
00:04:55.158 ************************************
00:04:55.158 16:59:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:55.158 16:59:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.158 16:59:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.158 16:59:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:55.158 ************************************
00:04:55.158 START TEST locking_app_on_unlocked_coremask
00:04:55.158 ************************************
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2310775
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2310775 /var/tmp/spdk.sock
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2310775 ']'
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:55.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:55.158 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:55.158 [2024-11-20 16:59:13.134345] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:04:55.158 [2024-11-20 16:59:13.134391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310775 ]
00:04:55.417 [2024-11-20 16:59:13.209637] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:55.417 [2024-11-20 16:59:13.209661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:55.417 [2024-11-20 16:59:13.247102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2310783
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2310783 /var/tmp/spdk2.sock
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2310783 ']'
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:55.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:55.676 16:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:55.676 [2024-11-20 16:59:13.525478] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:04:55.676 [2024-11-20 16:59:13.525531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310783 ]
00:04:55.676 [2024-11-20 16:59:13.612540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:55.676 [2024-11-20 16:59:13.694344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:56.609 16:59:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:56.609 16:59:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:56.609 16:59:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2310783
00:04:56.609 16:59:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2310783
00:04:56.609 16:59:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:57.176 lslocks: write error
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2310775
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2310775 ']'
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2310775
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310775
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310775'
00:04:57.176 killing process with pid 2310775 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2310775
00:04:57.176 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2310775
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2310783
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2310783 ']'
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2310783
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310783
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310783'
00:04:57.745 killing process with pid 2310783 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2310783
00:04:57.745 16:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2310783
00:04:58.004
00:04:58.004 real 0m2.931s
00:04:58.004 user 0m3.073s
00:04:58.004 sys 0m0.969s
00:04:58.004 16:59:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.004 16:59:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:58.004 ************************************
00:04:58.004 END TEST locking_app_on_unlocked_coremask
00:04:58.004 ************************************
00:04:58.263 16:59:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:58.263 16:59:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:58.263 16:59:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.263 16:59:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:58.263 ************************************
00:04:58.263 START TEST locking_app_on_locked_coremask
00:04:58.263 ************************************
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2311276
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2311276 /var/tmp/spdk.sock
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2311276 ']'
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:58.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:58.263 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:58.263 [2024-11-20 16:59:16.134539] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:04:58.263 [2024-11-20 16:59:16.134584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311276 ]
00:04:58.263 [2024-11-20 16:59:16.211272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.263 [2024-11-20 16:59:16.252435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2311502
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2311502 /var/tmp/spdk2.sock
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2311502 /var/tmp/spdk2.sock
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2311502 /var/tmp/spdk2.sock
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2311502 ']'
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:59.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:59.199 16:59:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:59.199 [2024-11-20 16:59:17.012184] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:04:59.199 [2024-11-20 16:59:17.012241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311502 ]
00:04:59.199 [2024-11-20 16:59:17.101475] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2311276 has claimed it.
00:04:59.199 [2024-11-20 16:59:17.101515] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:59.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2311502) - No such process
00:04:59.767 ERROR: process (pid: 2311502) is no longer running
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2311276
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2311276
00:04:59.767 16:59:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:00.334 lslocks: write error
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2311276
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2311276 ']'
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2311276
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311276
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311276'
00:05:00.334 killing process with pid 2311276 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2311276
00:05:00.334 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2311276
00:05:00.593
00:05:00.593 real 0m2.342s
00:05:00.593 user 0m2.600s
00:05:00.593 sys 0m0.663s
00:05:00.593 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:00.593 16:59:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:00.593 ************************************
00:05:00.593 END TEST locking_app_on_locked_coremask
00:05:00.593 ************************************
00:05:00.593 16:59:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:00.593 16:59:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:00.593 16:59:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:00.593 16:59:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:00.593 ************************************
00:05:00.593 START TEST locking_overlapped_coremask
00:05:00.593 ************************************
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2311768
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2311768 /var/tmp/spdk.sock
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2311768 ']'
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:00.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:00.593 16:59:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:00.593 [2024-11-20 16:59:18.547860] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:05:00.593 [2024-11-20 16:59:18.547897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311768 ]
00:05:00.593 [2024-11-20 16:59:18.626122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:00.852 [2024-11-20 16:59:18.670526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:00.852 [2024-11-20 16:59:18.670586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:00.852 [2024-11-20 16:59:18.670587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2311872
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2311872 /var/tmp/spdk2.sock
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2311872 /var/tmp/spdk2.sock
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2311872 /var/tmp/spdk2.sock
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2311872 ']'
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:01.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:01.418 16:59:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:01.418 [2024-11-20 16:59:19.432304] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:05:01.418 [2024-11-20 16:59:19.432356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311872 ]
00:05:01.676 [2024-11-20 16:59:19.525657] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2311768 has claimed it.
00:05:01.676 [2024-11-20 16:59:19.525693] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:02.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2311872) - No such process
00:05:02.243 ERROR: process (pid: 2311872) is no longer running
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2311768
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2311768 ']'
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2311768
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311768
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311768'
00:05:02.243 killing process with pid 2311768 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2311768
00:05:02.243 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2311768
00:05:02.502
00:05:02.502 real 0m1.941s
00:05:02.502 user 0m5.608s
00:05:02.502 sys 0m0.420s
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:02.502 ************************************
00:05:02.502 END TEST locking_overlapped_coremask
00:05:02.502 ************************************
00:05:02.502 16:59:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:02.502 16:59:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.502 16:59:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.502 16:59:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:02.502 ************************************
00:05:02.502 START TEST locking_overlapped_coremask_via_rpc
00:05:02.502 ************************************
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2312043
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2312043 /var/tmp/spdk.sock
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2312043 ']'
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:02.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:02.502 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:02.761 [2024-11-20 16:59:20.555382] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:05:02.761 [2024-11-20 16:59:20.555428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312043 ]
00:05:02.761 [2024-11-20 16:59:20.629355] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:02.761 [2024-11-20 16:59:20.629381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:02.761 [2024-11-20 16:59:20.670522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:02.761 [2024-11-20 16:59:20.670633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.761 [2024-11-20 16:59:20.670633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2312220
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2312220 /var/tmp/spdk2.sock
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2312220 ']'
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:03.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:03.019 16:59:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:03.019 [2024-11-20 16:59:20.942111] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:05:03.019 [2024-11-20 16:59:20.942162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312220 ]
00:05:03.019 [2024-11-20 16:59:21.035023] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:03.019 [2024-11-20 16:59:21.035050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.277 [2024-11-20 16:59:21.122458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.277 [2024-11-20 16:59:21.122570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.277 [2024-11-20 16:59:21.122571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.844 16:59:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.844 [2024-11-20 16:59:21.799269] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2312043 has claimed it. 00:05:03.844 request: 00:05:03.844 { 00:05:03.844 "method": "framework_enable_cpumask_locks", 00:05:03.844 "req_id": 1 00:05:03.844 } 00:05:03.844 Got JSON-RPC error response 00:05:03.844 response: 00:05:03.844 { 00:05:03.844 "code": -32603, 00:05:03.844 "message": "Failed to claim CPU core: 2" 00:05:03.844 } 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2312043 /var/tmp/spdk.sock 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2312043 ']' 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.844 16:59:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2312220 /var/tmp/spdk2.sock 00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2312220 ']' 00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
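[Editor's note] The `framework_enable_cpumask_locks` failure traced above (the second target tries to claim core 2, already locked by pid 2312043) comes back as a JSON-RPC error object, which the harness then maps to a non-zero status (`es=1`). A minimal sketch of handling such a response; the error payload is copied from the log, while the `to_exit_status` helper is a hypothetical illustration, not part of SPDK:

```python
import json

# Error object copied verbatim from the rpc_cmd trace above.
raw = '{"code": -32603, "message": "Failed to claim CPU core: 2"}'

def to_exit_status(error_obj):
    # Hypothetical helper: any JSON-RPC error maps to exit status 1,
    # mirroring the harness's `es=1` bookkeeping in autotest_common.sh.
    return 1 if error_obj.get("code") else 0

err = json.loads(raw)
status = to_exit_status(err)  # non-zero because an error code is present
```

This only models the response-to-status mapping visible in the trace; the actual transport (a request over the `/var/tmp/spdk2.sock` Unix domain socket via `scripts/rpc.py`) is not reproduced here.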
00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.102 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.361 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.361 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.361 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:04.361 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:04.361 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:04.361 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:04.361 00:05:04.361 real 0m1.719s 00:05:04.361 user 0m0.828s 00:05:04.361 sys 0m0.140s 00:05:04.361 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.361 16:59:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.361 ************************************ 00:05:04.361 END TEST locking_overlapped_coremask_via_rpc 00:05:04.361 ************************************ 00:05:04.361 16:59:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:04.361 16:59:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2312043 ]] 00:05:04.361 16:59:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2312043 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2312043 ']' 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2312043 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2312043 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2312043' 00:05:04.361 killing process with pid 2312043 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2312043 00:05:04.361 16:59:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2312043 00:05:04.619 16:59:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2312220 ]] 00:05:04.619 16:59:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2312220 00:05:04.619 16:59:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2312220 ']' 00:05:04.619 16:59:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2312220 00:05:04.619 16:59:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:04.619 16:59:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.619 16:59:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2312220 00:05:04.876 16:59:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:04.876 16:59:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:04.876 16:59:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2312220' 00:05:04.876 killing process with pid 2312220 00:05:04.876 16:59:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2312220 00:05:04.876 16:59:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2312220 00:05:05.135 16:59:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:05.135 16:59:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:05.135 16:59:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2312043 ]] 00:05:05.135 16:59:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2312043 00:05:05.135 16:59:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2312043 ']' 00:05:05.135 16:59:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2312043 00:05:05.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2312043) - No such process 00:05:05.135 16:59:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2312043 is not found' 00:05:05.135 Process with pid 2312043 is not found 00:05:05.135 16:59:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2312220 ]] 00:05:05.135 16:59:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2312220 00:05:05.135 16:59:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2312220 ']' 00:05:05.135 16:59:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2312220 00:05:05.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2312220) - No such process 00:05:05.135 16:59:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2312220 is not found' 00:05:05.135 Process with pid 2312220 is not found 00:05:05.135 16:59:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:05.135 00:05:05.135 real 0m15.071s 00:05:05.135 user 0m26.665s 00:05:05.135 sys 0m4.966s 00:05:05.135 16:59:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.135 
16:59:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.135 ************************************ 00:05:05.135 END TEST cpu_locks 00:05:05.135 ************************************ 00:05:05.135 00:05:05.135 real 0m39.989s 00:05:05.135 user 1m16.836s 00:05:05.135 sys 0m8.447s 00:05:05.135 16:59:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.135 16:59:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.135 ************************************ 00:05:05.135 END TEST event 00:05:05.135 ************************************ 00:05:05.135 16:59:23 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:05.135 16:59:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.135 16:59:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.135 16:59:23 -- common/autotest_common.sh@10 -- # set +x 00:05:05.135 ************************************ 00:05:05.135 START TEST thread 00:05:05.135 ************************************ 00:05:05.135 16:59:23 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:05.135 * Looking for test storage... 
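[Editor's note] The `scripts/common.sh` traces in this log (`lt 1.15 2` via `cmp_versions`) show the lcov version gate: each version string is split on `.`, `-`, or `:` and compared component-wise. A sketch of that comparison under the assumption that missing components compare as 0 (the shell script's exact padding behavior is not visible in this trace):

```python
import re

def cmp_versions_lt(ver1: str, ver2: str) -> bool:
    # Mirrors the cmp_versions trace: split on '.', '-', ':' (IFS=.-:)
    # and compare numeric components left to right.
    a = [int(x) for x in re.split(r"[.:-]", ver1)]
    b = [int(x) for x in re.split(r"[.:-]", ver2)]
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0  # assumption: absent component == 0
        y = b[i] if i < len(b) else 0
        if x < y:
            return True
        if x > y:
            return False
    return False  # versions are equal
```

For the traced call, `1.15` vs `2` decides on the first component (1 < 2), matching the `return 0` (shell true) seen in the log.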
00:05:05.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.395 16:59:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.395 16:59:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.395 16:59:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.395 16:59:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.395 16:59:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.395 16:59:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.395 16:59:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.395 16:59:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.395 16:59:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.395 16:59:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.395 16:59:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.395 16:59:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:05.395 16:59:23 thread -- scripts/common.sh@345 -- # : 1 00:05:05.395 16:59:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.395 16:59:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.395 16:59:23 thread -- scripts/common.sh@365 -- # decimal 1 00:05:05.395 16:59:23 thread -- scripts/common.sh@353 -- # local d=1 00:05:05.395 16:59:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.395 16:59:23 thread -- scripts/common.sh@355 -- # echo 1 00:05:05.395 16:59:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.395 16:59:23 thread -- scripts/common.sh@366 -- # decimal 2 00:05:05.395 16:59:23 thread -- scripts/common.sh@353 -- # local d=2 00:05:05.395 16:59:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.395 16:59:23 thread -- scripts/common.sh@355 -- # echo 2 00:05:05.395 16:59:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.395 16:59:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.395 16:59:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.395 16:59:23 thread -- scripts/common.sh@368 -- # return 0 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.395 --rc genhtml_branch_coverage=1 00:05:05.395 --rc genhtml_function_coverage=1 00:05:05.395 --rc genhtml_legend=1 00:05:05.395 --rc geninfo_all_blocks=1 00:05:05.395 --rc geninfo_unexecuted_blocks=1 00:05:05.395 00:05:05.395 ' 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.395 --rc genhtml_branch_coverage=1 00:05:05.395 --rc genhtml_function_coverage=1 00:05:05.395 --rc genhtml_legend=1 00:05:05.395 --rc geninfo_all_blocks=1 00:05:05.395 --rc geninfo_unexecuted_blocks=1 00:05:05.395 00:05:05.395 ' 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.395 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.395 --rc genhtml_branch_coverage=1 00:05:05.395 --rc genhtml_function_coverage=1 00:05:05.395 --rc genhtml_legend=1 00:05:05.395 --rc geninfo_all_blocks=1 00:05:05.395 --rc geninfo_unexecuted_blocks=1 00:05:05.395 00:05:05.395 ' 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.395 --rc genhtml_branch_coverage=1 00:05:05.395 --rc genhtml_function_coverage=1 00:05:05.395 --rc genhtml_legend=1 00:05:05.395 --rc geninfo_all_blocks=1 00:05:05.395 --rc geninfo_unexecuted_blocks=1 00:05:05.395 00:05:05.395 ' 00:05:05.395 16:59:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.395 16:59:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.395 ************************************ 00:05:05.395 START TEST thread_poller_perf 00:05:05.395 ************************************ 00:05:05.395 16:59:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:05.395 [2024-11-20 16:59:23.319793] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:05:05.395 [2024-11-20 16:59:23.319866] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312620 ] 00:05:05.395 [2024-11-20 16:59:23.398021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.654 [2024-11-20 16:59:23.438736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.654 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:06.590 [2024-11-20T15:59:24.633Z] ====================================== 00:05:06.590 [2024-11-20T15:59:24.633Z] busy:2108491582 (cyc) 00:05:06.590 [2024-11-20T15:59:24.633Z] total_run_count: 422000 00:05:06.590 [2024-11-20T15:59:24.633Z] tsc_hz: 2100000000 (cyc) 00:05:06.590 [2024-11-20T15:59:24.633Z] ====================================== 00:05:06.590 [2024-11-20T15:59:24.633Z] poller_cost: 4996 (cyc), 2379 (nsec) 00:05:06.590 00:05:06.590 real 0m1.182s 00:05:06.590 user 0m1.102s 00:05:06.590 sys 0m0.075s 00:05:06.590 16:59:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.590 16:59:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.590 ************************************ 00:05:06.590 END TEST thread_poller_perf 00:05:06.590 ************************************ 00:05:06.590 16:59:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:06.590 16:59:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:06.590 16:59:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.590 16:59:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.590 ************************************ 00:05:06.590 START TEST thread_poller_perf 00:05:06.590 
************************************ 00:05:06.590 16:59:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:06.590 [2024-11-20 16:59:24.573492] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:05:06.590 [2024-11-20 16:59:24.573559] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312868 ] 00:05:06.849 [2024-11-20 16:59:24.652691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.849 [2024-11-20 16:59:24.692656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.849 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:07.785 [2024-11-20T15:59:25.828Z] ====================================== 00:05:07.785 [2024-11-20T15:59:25.828Z] busy:2101215098 (cyc) 00:05:07.785 [2024-11-20T15:59:25.828Z] total_run_count: 5446000 00:05:07.785 [2024-11-20T15:59:25.828Z] tsc_hz: 2100000000 (cyc) 00:05:07.785 [2024-11-20T15:59:25.828Z] ====================================== 00:05:07.785 [2024-11-20T15:59:25.828Z] poller_cost: 385 (cyc), 183 (nsec) 00:05:07.785 00:05:07.785 real 0m1.180s 00:05:07.785 user 0m1.094s 00:05:07.785 sys 0m0.081s 00:05:07.785 16:59:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.785 16:59:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.785 ************************************ 00:05:07.785 END TEST thread_poller_perf 00:05:07.785 ************************************ 00:05:07.785 16:59:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:07.785 00:05:07.785 real 0m2.670s 00:05:07.785 user 0m2.343s 00:05:07.785 sys 0m0.340s 00:05:07.785 16:59:25 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.785 16:59:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.785 ************************************ 00:05:07.785 END TEST thread 00:05:07.785 ************************************ 00:05:07.785 16:59:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:07.785 16:59:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:07.785 16:59:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.785 16:59:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.785 16:59:25 -- common/autotest_common.sh@10 -- # set +x 00:05:08.045 ************************************ 00:05:08.045 START TEST app_cmdline 00:05:08.045 ************************************ 00:05:08.045 16:59:25 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:08.045 * Looking for test storage... 00:05:08.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:08.045 16:59:25 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.045 16:59:25 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.045 16:59:25 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.045 16:59:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:08.045 16:59:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.045 16:59:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.045 --rc genhtml_branch_coverage=1 
00:05:08.045 --rc genhtml_function_coverage=1 00:05:08.045 --rc genhtml_legend=1 00:05:08.045 --rc geninfo_all_blocks=1 00:05:08.045 --rc geninfo_unexecuted_blocks=1 00:05:08.045 00:05:08.045 ' 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.045 --rc genhtml_branch_coverage=1 00:05:08.045 --rc genhtml_function_coverage=1 00:05:08.045 --rc genhtml_legend=1 00:05:08.045 --rc geninfo_all_blocks=1 00:05:08.045 --rc geninfo_unexecuted_blocks=1 00:05:08.045 00:05:08.045 ' 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.045 --rc genhtml_branch_coverage=1 00:05:08.045 --rc genhtml_function_coverage=1 00:05:08.045 --rc genhtml_legend=1 00:05:08.045 --rc geninfo_all_blocks=1 00:05:08.045 --rc geninfo_unexecuted_blocks=1 00:05:08.045 00:05:08.045 ' 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.045 --rc genhtml_branch_coverage=1 00:05:08.045 --rc genhtml_function_coverage=1 00:05:08.045 --rc genhtml_legend=1 00:05:08.045 --rc geninfo_all_blocks=1 00:05:08.045 --rc geninfo_unexecuted_blocks=1 00:05:08.045 00:05:08.045 ' 00:05:08.045 16:59:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:08.045 16:59:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2313163 00:05:08.045 16:59:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:08.045 16:59:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2313163 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2313163 ']' 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.045 16:59:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.045 [2024-11-20 16:59:26.066777] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:05:08.045 [2024-11-20 16:59:26.066823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313163 ] 00:05:08.304 [2024-11-20 16:59:26.139278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.304 [2024-11-20 16:59:26.181710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.562 16:59:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.562 16:59:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:08.562 16:59:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:08.562 { 00:05:08.562 "version": "SPDK v25.01-pre git sha1 0b4b4be7e", 00:05:08.562 "fields": { 00:05:08.562 "major": 25, 00:05:08.562 "minor": 1, 00:05:08.562 "patch": 0, 00:05:08.562 "suffix": "-pre", 00:05:08.562 "commit": "0b4b4be7e" 00:05:08.562 } 00:05:08.562 } 00:05:08.562 16:59:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:08.562 16:59:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:08.562 16:59:26 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:08.562 16:59:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:08.562 16:59:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:08.562 16:59:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:08.562 16:59:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:08.562 16:59:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.562 16:59:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.562 16:59:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.821 16:59:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:08.821 16:59:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:08.821 16:59:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.821 request: 00:05:08.821 { 00:05:08.821 "method": "env_dpdk_get_mem_stats", 00:05:08.821 "req_id": 1 00:05:08.821 } 00:05:08.821 Got JSON-RPC error response 00:05:08.821 response: 00:05:08.821 { 00:05:08.821 "code": -32601, 00:05:08.821 "message": "Method not found" 00:05:08.821 } 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.821 16:59:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2313163 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2313163 ']' 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2313163 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.821 16:59:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2313163 00:05:09.080 16:59:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.080 16:59:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.080 16:59:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2313163' 00:05:09.080 killing process with pid 2313163 00:05:09.080 
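The app_cmdline checks recorded above boil down to three steps: fetch `spdk_get_version`, sort the `rpc_get_methods` reply and compare it against the expected baseline methods, then confirm that an unregistered RPC (`env_dpdk_get_mem_stats`) fails with JSON-RPC error -32601. The sort-and-compare step can be sketched stand-alone; note the canned reply below is a hypothetical stand-in for real `rpc_cmd` output, not captured from this run:

```shell
# Stand-in for the JSON array rpc_get_methods would return (hypothetical data).
canned_reply='["spdk_get_version","rpc_get_methods"]'

# Mimic cmdline.sh: strip JSON punctuation, one method per line, sort,
# then flatten back for a single string comparison.
methods=$(echo "$canned_reply" | tr -d '[]"' | tr ',' '\n' | sort | tr '\n' ' ')

expected="rpc_get_methods spdk_get_version "
if [ "$methods" = "$expected" ]; then
    echo "baseline RPCs present"
fi
```

The real script does the same comparison with a bash array and a pattern match; the string form here only illustrates the ordering requirement that `sort` satisfies.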
16:59:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 2313163 00:05:09.080 16:59:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 2313163 00:05:09.339 00:05:09.339 real 0m1.325s 00:05:09.339 user 0m1.543s 00:05:09.339 sys 0m0.431s 00:05:09.339 16:59:27 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.339 16:59:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:09.339 ************************************ 00:05:09.339 END TEST app_cmdline 00:05:09.339 ************************************ 00:05:09.339 16:59:27 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:09.339 16:59:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.339 16:59:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.339 16:59:27 -- common/autotest_common.sh@10 -- # set +x 00:05:09.339 ************************************ 00:05:09.339 START TEST version 00:05:09.339 ************************************ 00:05:09.339 16:59:27 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:09.339 * Looking for test storage... 
00:05:09.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:09.339 16:59:27 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.339 16:59:27 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.339 16:59:27 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.598 16:59:27 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.598 16:59:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.598 16:59:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.598 16:59:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.598 16:59:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.598 16:59:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.598 16:59:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.598 16:59:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.598 16:59:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.598 16:59:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.598 16:59:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.598 16:59:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.598 16:59:27 version -- scripts/common.sh@344 -- # case "$op" in 00:05:09.598 16:59:27 version -- scripts/common.sh@345 -- # : 1 00:05:09.598 16:59:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.598 16:59:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.598 16:59:27 version -- scripts/common.sh@365 -- # decimal 1 00:05:09.598 16:59:27 version -- scripts/common.sh@353 -- # local d=1 00:05:09.598 16:59:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.598 16:59:27 version -- scripts/common.sh@355 -- # echo 1 00:05:09.598 16:59:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.598 16:59:27 version -- scripts/common.sh@366 -- # decimal 2 00:05:09.598 16:59:27 version -- scripts/common.sh@353 -- # local d=2 00:05:09.598 16:59:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.598 16:59:27 version -- scripts/common.sh@355 -- # echo 2 00:05:09.598 16:59:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.598 16:59:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.598 16:59:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.598 16:59:27 version -- scripts/common.sh@368 -- # return 0 00:05:09.598 16:59:27 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.598 16:59:27 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.598 --rc genhtml_branch_coverage=1 00:05:09.598 --rc genhtml_function_coverage=1 00:05:09.598 --rc genhtml_legend=1 00:05:09.598 --rc geninfo_all_blocks=1 00:05:09.598 --rc geninfo_unexecuted_blocks=1 00:05:09.598 00:05:09.598 ' 00:05:09.598 16:59:27 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.598 --rc genhtml_branch_coverage=1 00:05:09.598 --rc genhtml_function_coverage=1 00:05:09.598 --rc genhtml_legend=1 00:05:09.598 --rc geninfo_all_blocks=1 00:05:09.598 --rc geninfo_unexecuted_blocks=1 00:05:09.598 00:05:09.598 ' 00:05:09.598 16:59:27 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.598 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.598 --rc genhtml_branch_coverage=1 00:05:09.598 --rc genhtml_function_coverage=1 00:05:09.598 --rc genhtml_legend=1 00:05:09.598 --rc geninfo_all_blocks=1 00:05:09.598 --rc geninfo_unexecuted_blocks=1 00:05:09.598 00:05:09.598 ' 00:05:09.598 16:59:27 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.598 --rc genhtml_branch_coverage=1 00:05:09.598 --rc genhtml_function_coverage=1 00:05:09.598 --rc genhtml_legend=1 00:05:09.598 --rc geninfo_all_blocks=1 00:05:09.599 --rc geninfo_unexecuted_blocks=1 00:05:09.599 00:05:09.599 ' 00:05:09.599 16:59:27 version -- app/version.sh@17 -- # get_header_version major 00:05:09.599 16:59:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:09.599 16:59:27 version -- app/version.sh@14 -- # cut -f2 00:05:09.599 16:59:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.599 16:59:27 version -- app/version.sh@17 -- # major=25 00:05:09.599 16:59:27 version -- app/version.sh@18 -- # get_header_version minor 00:05:09.599 16:59:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:09.599 16:59:27 version -- app/version.sh@14 -- # cut -f2 00:05:09.599 16:59:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.599 16:59:27 version -- app/version.sh@18 -- # minor=1 00:05:09.599 16:59:27 version -- app/version.sh@19 -- # get_header_version patch 00:05:09.599 16:59:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:09.599 16:59:27 version -- app/version.sh@14 -- # cut -f2 00:05:09.599 16:59:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.599 
16:59:27 version -- app/version.sh@19 -- # patch=0 00:05:09.599 16:59:27 version -- app/version.sh@20 -- # get_header_version suffix 00:05:09.599 16:59:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:09.599 16:59:27 version -- app/version.sh@14 -- # cut -f2 00:05:09.599 16:59:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.599 16:59:27 version -- app/version.sh@20 -- # suffix=-pre 00:05:09.599 16:59:27 version -- app/version.sh@22 -- # version=25.1 00:05:09.599 16:59:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:09.599 16:59:27 version -- app/version.sh@28 -- # version=25.1rc0 00:05:09.599 16:59:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:09.599 16:59:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:09.599 16:59:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:09.599 16:59:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:09.599 00:05:09.599 real 0m0.244s 00:05:09.599 user 0m0.148s 00:05:09.599 sys 0m0.139s 00:05:09.599 16:59:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.599 16:59:27 version -- common/autotest_common.sh@10 -- # set +x 00:05:09.599 ************************************ 00:05:09.599 END TEST version 00:05:09.599 ************************************ 00:05:09.599 16:59:27 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:09.599 16:59:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:09.599 16:59:27 -- spdk/autotest.sh@194 -- # uname -s 00:05:09.599 16:59:27 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:09.599 16:59:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:09.599 16:59:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:09.599 16:59:27 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:09.599 16:59:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:09.599 16:59:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:09.599 16:59:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.599 16:59:27 -- common/autotest_common.sh@10 -- # set +x 00:05:09.599 16:59:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:09.599 16:59:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:09.599 16:59:27 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:09.599 16:59:27 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:09.599 16:59:27 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:09.599 16:59:27 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:09.599 16:59:27 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:09.599 16:59:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.599 16:59:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.599 16:59:27 -- common/autotest_common.sh@10 -- # set +x 00:05:09.599 ************************************ 00:05:09.599 START TEST nvmf_tcp 00:05:09.599 ************************************ 00:05:09.599 16:59:27 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:09.859 * Looking for test storage... 
00:05:09.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.859 16:59:27 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.859 --rc genhtml_branch_coverage=1 00:05:09.859 --rc genhtml_function_coverage=1 00:05:09.859 --rc genhtml_legend=1 00:05:09.859 --rc geninfo_all_blocks=1 00:05:09.859 --rc geninfo_unexecuted_blocks=1 00:05:09.859 00:05:09.859 ' 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.859 --rc genhtml_branch_coverage=1 00:05:09.859 --rc genhtml_function_coverage=1 00:05:09.859 --rc genhtml_legend=1 00:05:09.859 --rc geninfo_all_blocks=1 00:05:09.859 --rc geninfo_unexecuted_blocks=1 00:05:09.859 00:05:09.859 ' 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:09.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.859 --rc genhtml_branch_coverage=1 00:05:09.859 --rc genhtml_function_coverage=1 00:05:09.859 --rc genhtml_legend=1 00:05:09.859 --rc geninfo_all_blocks=1 00:05:09.859 --rc geninfo_unexecuted_blocks=1 00:05:09.859 00:05:09.859 ' 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.859 --rc genhtml_branch_coverage=1 00:05:09.859 --rc genhtml_function_coverage=1 00:05:09.859 --rc genhtml_legend=1 00:05:09.859 --rc geninfo_all_blocks=1 00:05:09.859 --rc geninfo_unexecuted_blocks=1 00:05:09.859 00:05:09.859 ' 00:05:09.859 16:59:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:09.859 16:59:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:09.859 16:59:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.859 16:59:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.859 ************************************ 00:05:09.859 START TEST nvmf_target_core 00:05:09.859 ************************************ 00:05:09.859 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:09.859 * Looking for test storage... 
00:05:10.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.118 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.119 --rc genhtml_branch_coverage=1 00:05:10.119 --rc genhtml_function_coverage=1 00:05:10.119 --rc genhtml_legend=1 00:05:10.119 --rc geninfo_all_blocks=1 00:05:10.119 --rc geninfo_unexecuted_blocks=1 00:05:10.119 00:05:10.119 ' 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.119 --rc genhtml_branch_coverage=1 
00:05:10.119 --rc genhtml_function_coverage=1 00:05:10.119 --rc genhtml_legend=1 00:05:10.119 --rc geninfo_all_blocks=1 00:05:10.119 --rc geninfo_unexecuted_blocks=1 00:05:10.119 00:05:10.119 ' 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.119 --rc genhtml_branch_coverage=1 00:05:10.119 --rc genhtml_function_coverage=1 00:05:10.119 --rc genhtml_legend=1 00:05:10.119 --rc geninfo_all_blocks=1 00:05:10.119 --rc geninfo_unexecuted_blocks=1 00:05:10.119 00:05:10.119 ' 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.119 --rc genhtml_branch_coverage=1 00:05:10.119 --rc genhtml_function_coverage=1 00:05:10.119 --rc genhtml_legend=1 00:05:10.119 --rc geninfo_all_blocks=1 00:05:10.119 --rc geninfo_unexecuted_blocks=1 00:05:10.119 00:05:10.119 ' 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:10.119 16:59:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
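The `[: : integer expression expected` message logged from `common.sh` line 33 above is a test(1) artifact: `-eq` requires integers on both sides, and an unset or empty variable expands to the empty string. A minimal sketch of the failure mode and a defensive default (the variable name `flag` is illustrative, not from the script):

```shell
# '-eq' with an empty operand reproduces the logged error:
flag=""
# [ "$flag" -eq 1 ]              # -> "[: : integer expression expected"

# Defaulting empty/unset to 0 keeps the numeric test well-formed.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

The error is harmless here because the script treats the failed test as false and continues, which matches the log flow that follows.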
00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:10.119 ************************************ 00:05:10.119 START TEST nvmf_abort 00:05:10.119 ************************************ 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:10.119 * Looking for test storage... 
00:05:10.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.119 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.379 
16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.379 --rc genhtml_branch_coverage=1 00:05:10.379 --rc genhtml_function_coverage=1 00:05:10.379 --rc genhtml_legend=1 00:05:10.379 --rc geninfo_all_blocks=1 00:05:10.379 --rc 
geninfo_unexecuted_blocks=1 00:05:10.379 00:05:10.379 ' 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.379 --rc genhtml_branch_coverage=1 00:05:10.379 --rc genhtml_function_coverage=1 00:05:10.379 --rc genhtml_legend=1 00:05:10.379 --rc geninfo_all_blocks=1 00:05:10.379 --rc geninfo_unexecuted_blocks=1 00:05:10.379 00:05:10.379 ' 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.379 --rc genhtml_branch_coverage=1 00:05:10.379 --rc genhtml_function_coverage=1 00:05:10.379 --rc genhtml_legend=1 00:05:10.379 --rc geninfo_all_blocks=1 00:05:10.379 --rc geninfo_unexecuted_blocks=1 00:05:10.379 00:05:10.379 ' 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.379 --rc genhtml_branch_coverage=1 00:05:10.379 --rc genhtml_function_coverage=1 00:05:10.379 --rc genhtml_legend=1 00:05:10.379 --rc geninfo_all_blocks=1 00:05:10.379 --rc geninfo_unexecuted_blocks=1 00:05:10.379 00:05:10.379 ' 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.379 16:59:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.379 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:10.380 16:59:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.945 16:59:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:16.945 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:16.945 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.945 16:59:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.945 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:16.946 Found net devices under 0000:86:00.0: cvl_0_0 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:05:16.946 Found net devices under 0000:86:00.1: cvl_0_1 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.946 16:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:16.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:05:16.946 00:05:16.946 --- 10.0.0.2 ping statistics --- 00:05:16.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.946 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:16.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:05:16.946 00:05:16.946 --- 10.0.0.1 ping statistics --- 00:05:16.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.946 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2316843 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2316843 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2316843 ']' 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.946 16:59:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.946 [2024-11-20 16:59:34.350291] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:05:16.946 [2024-11-20 16:59:34.350344] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.946 [2024-11-20 16:59:34.428708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.946 [2024-11-20 16:59:34.472875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.946 [2024-11-20 16:59:34.472906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.946 [2024-11-20 16:59:34.472913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.946 [2024-11-20 16:59:34.472919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.946 [2024-11-20 16:59:34.472924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:16.946 [2024-11-20 16:59:34.474160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.946 [2024-11-20 16:59:34.474267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.946 [2024-11-20 16:59:34.474267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.204 [2024-11-20 16:59:35.221285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.204 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.462 Malloc0 00:05:17.462 16:59:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.462 Delay0 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.462 [2024-11-20 16:59:35.306010] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.462 16:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:17.462 [2024-11-20 16:59:35.401838] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:19.991 Initializing NVMe Controllers 00:05:19.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:19.991 controller IO queue size 128 less than required 00:05:19.991 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:19.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:19.991 Initialization complete. Launching workers. 
00:05:19.991 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37072 00:05:19.991 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37133, failed to submit 62 00:05:19.991 success 37076, unsuccessful 57, failed 0 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:19.991 rmmod nvme_tcp 00:05:19.991 rmmod nvme_fabrics 00:05:19.991 rmmod nvme_keyring 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:19.991 16:59:37 
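The abort run above closes with three summary lines (I/O completed, aborts submitted, success/unsuccessful/failed). When post-processing logs like this one, those counters can be pulled out with a small sketch; the regex patterns and field names below are assumptions based only on the exact line format shown in this log, not on any SPDK-defined output contract:

```python
import re

# Summary lines copied verbatim from the abort run above.
LOG = """\
NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37072
CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37133, failed to submit 62
success 37076, unsuccessful 57, failed 0
"""

def parse_abort_summary(text):
    """Extract the abort counters from the example app's closing lines."""
    stats = {}
    m = re.search(r"abort submitted (\d+), failed to submit (\d+)", text)
    if m:
        stats["submitted"], stats["failed_to_submit"] = map(int, m.groups())
    m = re.search(r"success (\d+), unsuccessful (\d+), failed (\d+)", text)
    if m:
        stats["success"], stats["unsuccessful"], stats["failed"] = map(int, m.groups())
    return stats

stats = parse_abort_summary(LOG)
# Sanity check that holds for this run: every submitted abort is accounted for
# (37076 success + 57 unsuccessful + 0 failed == 37133 submitted).
assert stats["success"] + stats["unsuccessful"] + stats["failed"] == stats["submitted"]
```

The same consistency check (success + unsuccessful + failed == submitted) is a cheap way to flag truncated or corrupted log captures when scanning many runs.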
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2316843 ']' 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2316843 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2316843 ']' 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2316843 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2316843 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2316843' 00:05:19.991 killing process with pid 2316843 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2316843 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2316843 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.991 16:59:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.897 16:59:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:21.897 00:05:21.897 real 0m11.818s 00:05:21.897 user 0m13.558s 00:05:21.897 sys 0m5.380s 00:05:21.897 16:59:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.897 16:59:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.897 ************************************ 00:05:21.897 END TEST nvmf_abort 00:05:21.897 ************************************ 00:05:21.897 16:59:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.897 16:59:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:21.897 16:59:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.897 16:59:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:22.159 ************************************ 00:05:22.159 START TEST nvmf_ns_hotplug_stress 00:05:22.159 ************************************ 00:05:22.159 16:59:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:22.159 * Looking for test storage... 00:05:22.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.159 
16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:22.159 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.160 16:59:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.160 --rc genhtml_branch_coverage=1 00:05:22.160 --rc genhtml_function_coverage=1 00:05:22.160 --rc genhtml_legend=1 00:05:22.160 --rc geninfo_all_blocks=1 00:05:22.160 --rc geninfo_unexecuted_blocks=1 00:05:22.160 00:05:22.160 ' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.160 --rc genhtml_branch_coverage=1 00:05:22.160 --rc genhtml_function_coverage=1 00:05:22.160 --rc genhtml_legend=1 00:05:22.160 --rc geninfo_all_blocks=1 00:05:22.160 --rc geninfo_unexecuted_blocks=1 00:05:22.160 00:05:22.160 ' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.160 --rc genhtml_branch_coverage=1 00:05:22.160 --rc genhtml_function_coverage=1 00:05:22.160 --rc genhtml_legend=1 00:05:22.160 --rc geninfo_all_blocks=1 00:05:22.160 --rc geninfo_unexecuted_blocks=1 00:05:22.160 00:05:22.160 ' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.160 --rc genhtml_branch_coverage=1 00:05:22.160 --rc genhtml_function_coverage=1 00:05:22.160 --rc genhtml_legend=1 00:05:22.160 --rc geninfo_all_blocks=1 00:05:22.160 --rc geninfo_unexecuted_blocks=1 00:05:22.160 
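The trace above walks through scripts/common.sh's `lt 1.15 2` check: each version is split on `.`, `-`, or `:` (the `IFS=.-: read -ra` lines), then compared component by component as integers, and the comparison stops at the first differing component (here `1 < 2`). A minimal Python equivalent of that behavior is sketched below; it is not SPDK code, and treating a missing or non-numeric component as 0 is an assumption, since the trace only exercises purely numeric versions:

```python
import re

def _components(ver):
    # Split on '.', '-', or ':', mirroring the IFS=.-: read in the trace above.
    return [int(p) if p.isdigit() else 0 for p in re.split(r"[.:-]", ver)]

def lt(ver1, ver2):
    """True when ver1 < ver2, comparing numeric components left to right."""
    a, b = _components(ver1), _components(ver2)
    # Pad the shorter version with zeros so "2" compares like "2.0".
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b

# The case traced in the log: lcov 1.15 is older than 2.
assert lt("1.15", "2")
```

List comparison in Python is already lexicographic over the components, which is exactly the left-to-right, stop-at-first-difference ordering the shell loop implements.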
00:05:22.160 ' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:22.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:22.160 16:59:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:28.799 16:59:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
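The nvmf/common.sh trace above builds per-family device-ID buckets (`e810`, `x722`, `mlx`) keyed on vendor `0x8086` (intel) or `0x15b3` (mellanox), then classifies each discovered NIC; the run below it finds two `0x8086:0x159b` ports and takes the e810 branch. A hedged Python rendering of just that classification step, with the ID lists copied from the trace (the function name and return values are illustrative, not SPDK's):

```python
# Device-ID buckets copied from the nvmf/common.sh trace above.
E810 = {0x1592, 0x159b}
X722 = {0x37d2}
MLX = {0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013}

def classify(vendor, device):
    """Return the NIC family the traced pci_devs logic would select, else None."""
    if vendor == 0x8086 and device in E810:
        return "e810"
    if vendor == 0x8086 and device in X722:
        return "x722"
    if vendor == 0x15b3 and device in MLX:
        return "mlx"
    return None

# The two ports found in this run (0000:86:00.0 and 0000:86:00.1, 0x8086:0x159b)
# both classify as e810, matching the ice-driver branch taken in the log.
assert classify(0x8086, 0x159b) == "e810"
```

Keeping the buckets as sets makes the membership tests O(1) and keeps the mapping easy to diff against future additions to common.sh.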
"${pci_devs[@]}" 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:28.799 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:28.799 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:28.799 16:59:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:28.799 Found net devices under 0000:86:00.0: cvl_0_0 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.799 16:59:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:28.799 Found net devices under 0000:86:00.1: cvl_0_1 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:28.799 16:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:28.799 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:28.799 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:28.799 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:28.799 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:28.799 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:28.799 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:28.799 16:59:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:28.799 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:28.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:28.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:05:28.800 00:05:28.800 --- 10.0.0.2 ping statistics --- 00:05:28.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.800 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:28.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:28.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:05:28.800 00:05:28.800 --- 10.0.0.1 ping statistics --- 00:05:28.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.800 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2321102 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2321102 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2321102 ']' 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
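For reference, the `waitforlisten` step traced above boils down to polling for the RPC Unix socket. This is a minimal sketch only; the real helper in `autotest_common.sh` also verifies the pid and retries RPC calls, and the names and timeout below are assumptions:

```shell
# Sketch of the waitforlisten pattern: poll until the RPC socket path
# appears (or retries run out), then report success/failure via exit code.
waitforlisten() {
    rpc_addr=${1:-/var/tmp/spdk.sock}   # socket nvmf_tgt is expected to create
    max_retries=${2:-100}
    i=0
    while [ ! -S "$rpc_addr" ] && [ "$i" -lt "$max_retries" ]; do
        sleep 0.1
        i=$((i + 1))
    done
    [ -S "$rpc_addr" ]
}
```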
00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.800 16:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.800 [2024-11-20 16:59:46.231090] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:05:28.800 [2024-11-20 16:59:46.231134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:28.800 [2024-11-20 16:59:46.311004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.800 [2024-11-20 16:59:46.352007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:28.800 [2024-11-20 16:59:46.352039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:28.800 [2024-11-20 16:59:46.352046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.800 [2024-11-20 16:59:46.352052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.800 [2024-11-20 16:59:46.352057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:28.800 [2024-11-20 16:59:46.353486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.800 [2024-11-20 16:59:46.353592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.800 [2024-11-20 16:59:46.353593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.058 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.058 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:29.058 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:29.058 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.058 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.318 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:29.318 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:29.318 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:29.318 [2024-11-20 16:59:47.277368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.318 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:29.576 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:29.835 [2024-11-20 16:59:47.682776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:29.835 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:30.094 16:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:30.094 Malloc0 00:05:30.094 16:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:30.353 Delay0 00:05:30.353 16:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.611 16:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:30.870 NULL1 00:05:30.870 16:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:31.128 16:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:31.128 16:59:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2321595 00:05:31.128 16:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:31.128 16:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.128 16:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.386 16:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:31.386 16:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:31.645 true 00:05:31.645 16:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:31.645 16:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.903 16:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.903 16:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:32.161 16:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:32.161 true 00:05:32.161 16:59:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:32.161 16:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.419 16:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.677 16:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:32.677 16:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:32.935 true 00:05:32.935 16:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:32.935 16:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.193 16:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.193 16:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:33.193 16:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:33.452 true 00:05:33.452 16:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:33.452 16:59:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.711 16:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.970 16:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:33.970 16:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:34.228 true 00:05:34.228 16:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:34.228 16:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.228 16:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.486 16:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:34.486 16:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:34.744 true 00:05:34.744 16:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:34.744 16:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.003 16:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.262 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:35.262 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:35.262 true 00:05:35.520 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:35.520 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.520 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.788 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:35.788 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:36.046 true 00:05:36.046 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:36.046 16:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.303 
16:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.561 16:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:36.561 16:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:36.561 true 00:05:36.561 16:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:36.561 16:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.819 16:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.077 16:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:37.077 16:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:37.335 true 00:05:37.335 16:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:37.335 16:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.593 16:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.593 16:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:37.593 16:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:37.851 true 00:05:37.851 16:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:37.851 16:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.109 16:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.367 16:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:38.367 16:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:38.626 true 00:05:38.626 16:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:38.626 16:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.626 16:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.884 
16:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:38.884 16:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:39.142 true 00:05:39.142 16:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:39.142 16:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.401 16:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.658 16:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:39.658 16:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:39.917 true 00:05:39.917 16:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:39.917 16:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.917 16:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.176 16:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:40.176 16:59:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:40.435 true 00:05:40.435 16:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:40.435 16:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.693 16:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.952 16:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:40.952 16:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:40.952 true 00:05:41.211 16:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:41.211 16:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.211 16:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.469 16:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:41.469 16:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:41.727 true 00:05:41.727 16:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:41.727 16:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.986 16:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.244 17:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:42.244 17:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:42.244 true 00:05:42.244 17:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:42.244 17:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.502 17:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.761 17:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:42.761 17:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:43.019 true 00:05:43.019 17:00:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:43.019 17:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.278 17:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.536 17:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:43.536 17:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:43.536 true 00:05:43.536 17:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:43.536 17:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.794 17:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.052 17:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:44.052 17:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:44.310 true 00:05:44.310 17:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:44.310 17:00:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.568 17:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.827 17:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:44.827 17:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:44.827 true 00:05:44.827 17:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:44.827 17:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.087 17:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.345 17:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:45.345 17:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:45.604 true 00:05:45.604 17:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:45.604 17:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.604 17:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.864 17:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:45.864 17:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:46.122 true 00:05:46.122 17:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:46.122 17:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.381 17:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.640 17:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:46.640 17:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:46.640 true 00:05:46.898 17:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:46.898 17:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.898 
17:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.156 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:47.156 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:47.415 true 00:05:47.415 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:47.415 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.672 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.931 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:47.931 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:47.931 true 00:05:47.931 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:47.931 17:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.189 17:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.448 17:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:48.448 17:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:48.707 true 00:05:48.707 17:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:48.707 17:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.966 17:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.225 17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:49.225 17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:49.225 true 00:05:49.484 17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:49.484 17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.484 17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.743 
17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:49.743 17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:50.001 true 00:05:50.001 17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:50.002 17:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.260 17:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.518 17:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:50.518 17:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:50.518 true 00:05:50.518 17:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:50.518 17:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.776 17:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.033 17:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:51.033 17:00:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:51.293 true 00:05:51.293 17:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:51.293 17:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.552 17:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.810 17:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:51.810 17:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:51.810 true 00:05:51.810 17:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:51.810 17:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.069 17:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.329 17:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:52.329 17:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:52.588 true 00:05:52.588 17:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:52.588 17:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.847 17:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.105 17:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:53.105 17:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:53.105 true 00:05:53.105 17:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:53.105 17:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.364 17:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.622 17:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:53.622 17:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:53.880 true 00:05:53.880 17:00:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:53.880 17:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.138 17:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.397 17:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:54.397 17:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:54.397 true 00:05:54.656 17:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:54.656 17:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.656 17:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.914 17:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:54.914 17:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:55.172 true 00:05:55.172 17:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:55.172 17:00:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.431 17:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.690 17:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:55.690 17:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:55.690 true 00:05:55.690 17:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:55.690 17:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.948 17:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.206 17:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:56.207 17:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:56.465 true 00:05:56.465 17:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:56.465 17:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.723 17:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.982 17:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:56.982 17:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:56.982 true 00:05:56.982 17:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:56.982 17:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.240 17:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.499 17:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:57.499 17:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:57.758 true 00:05:57.758 17:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:57.758 17:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.017 
17:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.276 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:58.276 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:58.276 true 00:05:58.276 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:58.276 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.534 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.793 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:58.793 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:59.052 true 00:05:59.052 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:59.052 17:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.311 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.311 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:59.311 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:59.570 true 00:05:59.570 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:05:59.570 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.829 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.086 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:00.086 17:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:00.343 true 00:06:00.343 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:06:00.343 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.602 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.602 
17:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:00.602 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:00.860 true 00:06:00.860 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:06:00.860 17:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.118 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.375 Initializing NVMe Controllers 00:06:01.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:01.375 Controller IO queue size 128, less than required. 00:06:01.375 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:01.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:01.375 Initialization complete. Launching workers. 
00:06:01.375 ======================================================== 00:06:01.375 Latency(us) 00:06:01.375 Device Information : IOPS MiB/s Average min max 00:06:01.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27101.33 13.23 4722.77 2568.82 8673.95 00:06:01.375 ======================================================== 00:06:01.375 Total : 27101.33 13.23 4722.77 2568.82 8673.95 00:06:01.375 00:06:01.375 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:01.375 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:01.634 true 00:06:01.634 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2321595 00:06:01.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2321595) - No such process 00:06:01.634 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2321595 00:06:01.634 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.634 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.892 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:01.892 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:01.892 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:01.892 
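The iterations above all come from one hotplug loop in `target/ns_hotplug_stress.sh` (lines 44-50 in the trace markers): while the perf process (pid 2321595) is alive, remove namespace 1, re-add `Delay0`, and grow `NULL1` by one block each pass. The sketch below reconstructs that loop's assumed shape from the trace; the `rpc.py` invocations are stubbed with `echo` so it runs standalone, and the starting size and iteration count are illustrative, not taken from the script.

```shell
#!/usr/bin/env bash
# Hedged sketch of the ns_hotplug_stress.sh@44-50 loop seen in the trace.
# rpc() stands in for scripts/rpc.py so the sketch is self-contained.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1021                       # illustrative starting size

for _ in 1 2 3; do                   # real loop: while kill -0 "$perf_pid"
    rpc nvmf_subsystem_remove_ns "$NQN" 1        # hot-unplug NSID 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0      # hot-plug it back
    null_size=$((null_size + 1))                 # grow the null bdev
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

In the real script the loop exits when `kill -0` on the perf pid fails (the `No such process` line above), after which `wait` reaps it.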
17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.892 17:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:02.152 null0 00:06:02.152 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.152 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.152 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:02.411 null1 00:06:02.411 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.411 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.411 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:02.411 null2 00:06:02.694 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.694 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.694 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:02.694 null3 00:06:02.694 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.694 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:06:02.694 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:02.960 null4 00:06:02.960 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.960 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.960 17:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:03.246 null5 00:06:03.246 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.246 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.246 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:03.246 null6 00:06:03.246 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.246 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.246 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:03.528 null7 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:03.528 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:03.529 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2327632 2327635 2327639 2327641 2327644 2327648 2327651 2327654
00:06:03.529 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:03.529 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:03.529 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:03.787 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:03.787 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:03.787 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:03.788 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:03.788 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:03.788 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:03.788 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:03.788 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.047 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.048 17:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:04.048 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:04.048 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:04.048 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.307 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:04.566 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.566 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:04.566 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:04.566 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:04.566 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:04.566 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:04.566 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:04.566 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:04.824 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:05.083 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:05.083 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:05.083 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:05.083 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:05.083 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:05.083 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:05.083 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:05.083 17:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.083 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.083 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.083 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:05.342 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:05.343 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.601 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:05.861 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:05.861 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:05.861 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:05.861 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:05.861 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.861 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:05.861 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:05.861 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.121 17:00:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.121 17:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.121 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.121 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.380 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.381 
17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.381 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.640 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.640 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.640 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.640 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.640 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.640 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.640 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.640 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.899 17:00:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.899 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.158 17:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.158 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.158 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.158 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.158 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.158 17:00:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.158 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.158 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.417 
17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.417 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.418 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.676 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.676 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:07.677 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:07.677 rmmod nvme_tcp 00:06:07.677 rmmod nvme_fabrics 00:06:07.935 rmmod nvme_keyring 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2321102 ']' 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2321102 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2321102 ']' 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2321102 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2321102 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2321102' 00:06:07.935 killing process with pid 2321102 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2321102 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2321102 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:07.935 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:06:08.194 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:08.194 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:08.194 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.194 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.194 17:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.102 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:10.102 00:06:10.102 real 0m48.096s 00:06:10.102 user 3m24.170s 00:06:10.102 sys 0m17.275s 00:06:10.102 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.102 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:10.102 ************************************ 00:06:10.102 END TEST nvmf_ns_hotplug_stress 00:06:10.102 ************************************ 00:06:10.102 17:00:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:10.102 17:00:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.102 17:00:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.102 17:00:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.102 ************************************ 00:06:10.102 START TEST nvmf_delete_subsystem 00:06:10.102 ************************************ 00:06:10.102 
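The trace above repeats one simple pattern: up to 10 iterations, each attaching namespaces null0..null7 to nqn.2016-06.io.spdk:cnode1 (ns_hotplug_stress.sh@17) and then detaching namespace IDs 1..8 again (ns_hotplug_stress.sh@18). A minimal sketch of that loop, reconstructed only from the xtrace — the real target/ns_hotplug_stress.sh issues these calls through scripts/rpc.py, and the shuffled completion order in the log suggests it runs them concurrently; the `rpc` stub and the `hotplug_stress` name here are stand-ins:

```shell
#!/usr/bin/env bash
# Sketch of the hotplug stress loop as reconstructed from the xtrace.
# Assumption: "rpc" stands in for /path/to/spdk/scripts/rpc.py.
rpc() { echo "rpc.py $*"; }   # stub so the sketch runs without a target

NQN=nqn.2016-06.io.spdk:cnode1

hotplug_stress() {
    local i n
    for (( i = 0; i < 10; i++ )); do
        # attach bdevs null0..null7 as namespaces 1..8
        for n in {1..8}; do
            rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
        done
        # detach them again before the next iteration
        for n in {1..8}; do
            rpc nvmf_subsystem_remove_ns "$NQN" "$n"
        done
    done
}

hotplug_stress
```

Each iteration is 8 adds plus 8 removes, so a full run emits 160 RPC invocations, matching the volume of @17/@18 lines in the log.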
17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:10.362 * Looking for test storage... 00:06:10.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.362 17:00:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.362 17:00:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.362 --rc genhtml_branch_coverage=1 00:06:10.362 --rc genhtml_function_coverage=1 00:06:10.362 --rc genhtml_legend=1 00:06:10.362 --rc geninfo_all_blocks=1 00:06:10.362 --rc geninfo_unexecuted_blocks=1 00:06:10.362 00:06:10.362 ' 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.362 --rc genhtml_branch_coverage=1 00:06:10.362 --rc genhtml_function_coverage=1 00:06:10.362 --rc genhtml_legend=1 00:06:10.362 --rc geninfo_all_blocks=1 00:06:10.362 --rc geninfo_unexecuted_blocks=1 00:06:10.362 00:06:10.362 ' 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.362 --rc genhtml_branch_coverage=1 00:06:10.362 --rc genhtml_function_coverage=1 00:06:10.362 --rc genhtml_legend=1 00:06:10.362 --rc geninfo_all_blocks=1 00:06:10.362 --rc geninfo_unexecuted_blocks=1 00:06:10.362 00:06:10.362 ' 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.362 --rc genhtml_branch_coverage=1 00:06:10.362 --rc genhtml_function_coverage=1 00:06:10.362 --rc genhtml_legend=1 00:06:10.362 --rc geninfo_all_blocks=1 00:06:10.362 --rc geninfo_unexecuted_blocks=1 00:06:10.362 00:06:10.362 ' 
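The `lt 1.15 2` / `cmp_versions` trace above (checking whether the installed lcov predates 2.x before enabling the extra `--rc` options) can be sketched as a standalone script. This is a simplified, numeric-only reimplementation of the idea visible in the trace, not the actual SPDK `scripts/common.sh` source:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above: split each version on
# '.' or '-', then compare field by field, treating missing fields as 0.
# Numeric fields only; suffixes like "-rc1" are out of scope here.

cmp_versions() {
    local IFS='.-' op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then
            [[ $op == '>' ]]; return
        elif (( a < b )); then
            [[ $op == '<' ]]; return
        fi
    done
    # All fields equal: only <= and >= succeed
    [[ $op == '<=' || $op == '>=' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```

With this sketch, `lt 1.15 2` succeeds (1 < 2 on the first field) while `lt 2.1 2` fails, matching the branch the log takes when it appends the lcov 2.x `--rc` flags.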
00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.362 17:00:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.362 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
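The PATH dumps above show `paths/export.sh` prepending the same Go/protoc/golangci directories on every source, so PATH accumulates many duplicate entries. A defensive idempotent-prepend helper avoids that; this is a hypothetical sketch, not part of SPDK:

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present.
# Wrapping both sides in ':' makes the substring match exact per entry.
prepend_path_once() {
    local dir=$1
    case ":$PATH:" in
        *":$dir:"*) ;;              # already on PATH: no-op
        *) PATH="$dir:$PATH" ;;
    esac
}

PATH="/usr/bin:/bin"
prepend_path_once /opt/go/1.21.1/bin
prepend_path_once /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"
```

Sourcing an export script written this way any number of times leaves each toolchain directory on PATH exactly once, instead of the repeated `/opt/golangci/...:/opt/protoc/...:/opt/go/...` runs seen in the log.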
nvmftestinit 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.363 17:00:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.941 17:00:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.941 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:16.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:16.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:16.942 Found net devices under 0000:86:00.0: cvl_0_0 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:06:16.942 Found net devices under 0000:86:00.1: cvl_0_1 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:16.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:16.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:06:16.942 00:06:16.942 --- 10.0.0.2 ping statistics --- 00:06:16.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.942 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:06:16.942 00:06:16.942 --- 10.0.0.1 ping statistics --- 00:06:16.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.942 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:16.942 17:00:34 
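The `nvmf_tcp_init` sequence above (netns creation through the two pings) builds the test topology: one physical port is moved into a network namespace to act as the NVMe/TCP target at 10.0.0.2, while its sibling port stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of those commands follows; interface names and addresses are taken from the log, and running it requires root plus the two NICs, so it is illustrative only:

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology nvmf/common.sh sets up in the trace.
set -e

TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default port 4420 through the firewall
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, as the log does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target runs under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible later in the log), traffic between initiator and target traverses a real TCP path between the two ports rather than loopback.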
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2332164 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2332164 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2332164 ']' 00:06:16.942 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.943 [2024-11-20 17:00:34.399318] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:06:16.943 [2024-11-20 17:00:34.399364] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.943 [2024-11-20 17:00:34.480440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.943 [2024-11-20 17:00:34.521506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.943 [2024-11-20 17:00:34.521541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.943 [2024-11-20 17:00:34.521547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.943 [2024-11-20 17:00:34.521553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.943 [2024-11-20 17:00:34.521559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:16.943 [2024-11-20 17:00:34.522822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.943 [2024-11-20 17:00:34.522823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.943 [2024-11-20 17:00:34.659779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.943 [2024-11-20 17:00:34.679985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.943 NULL1 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.943 Delay0 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.943 17:00:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2332190 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:16.943 17:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:16.943 [2024-11-20 17:00:34.790884] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:18.848 17:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:18.848 17:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.848 17:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.107 Write completed with error (sct=0, sc=8) 00:06:19.107 starting I/O failed: -6 00:06:19.107 Write completed with error (sct=0, sc=8) 00:06:19.107 Read completed with error (sct=0, sc=8) 00:06:19.107 Write completed with error (sct=0, sc=8) 00:06:19.107 Write completed with error (sct=0, sc=8) 00:06:19.107 starting I/O failed: -6 00:06:19.107 Write completed with error (sct=0, sc=8) 00:06:19.107 Read completed with error (sct=0, sc=8) 00:06:19.107 Read completed with error (sct=0, sc=8) 00:06:19.107 Read completed with error (sct=0, sc=8) 00:06:19.107 starting I/O failed: -6 00:06:19.107 Read completed with error (sct=0, sc=8) 00:06:19.107 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 
00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 [2024-11-20 17:00:36.906115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19664a0 is same with the state(6) to be set 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, 
sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write 
completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 
00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 starting I/O failed: -6 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 [2024-11-20 17:00:36.910938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb43c000c40 is same with the state(6) to be set 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 
00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Read completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.108 Write completed with error (sct=0, sc=8) 00:06:19.109 Read completed with error (sct=0, sc=8) 00:06:19.109 Read completed with error (sct=0, sc=8) 00:06:19.109 Read completed with error (sct=0, sc=8) 00:06:19.109 Read completed with error (sct=0, sc=8) 00:06:19.109 Read completed with error (sct=0, sc=8) 00:06:19.109 Read completed with error (sct=0, sc=8) 00:06:19.109 Read completed with error (sct=0, sc=8) 00:06:19.109 Read completed with error (sct=0, sc=8) 00:06:19.109 Write completed with error (sct=0, sc=8) 00:06:20.043 [2024-11-20 17:00:37.883975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19679a0 is same with the state(6) to be set 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read 
completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 [2024-11-20 17:00:37.909621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19662c0 is same with the state(6) to be set 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.043 Write completed with 
error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Write completed with error (sct=0, sc=8) 00:06:20.043 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 [2024-11-20 17:00:37.909816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1966680 is same with the state(6) to be set 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, 
sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 [2024-11-20 17:00:37.912788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb43c00d020 is same with the state(6) to be set 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Read completed with error (sct=0, sc=8) 00:06:20.044 Write completed with error (sct=0, sc=8) 
00:06:20.044 Write completed with error (sct=0, sc=8)
00:06:20.044 Read completed with error (sct=0, sc=8)
00:06:20.044 Write completed with error (sct=0, sc=8)
00:06:20.044 Write completed with error (sct=0, sc=8)
00:06:20.044 Read completed with error (sct=0, sc=8)
00:06:20.044 Read completed with error (sct=0, sc=8)
00:06:20.044 [2024-11-20 17:00:37.913419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb43c00d7e0 is same with the state(6) to be set
00:06:20.044 Initializing NVMe Controllers
00:06:20.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:20.044 Controller IO queue size 128, less than required.
00:06:20.044 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:20.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:20.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:20.044 Initialization complete. Launching workers.
00:06:20.044 ========================================================
00:06:20.044 Latency(us)
00:06:20.044 Device Information : IOPS MiB/s Average min max
00:06:20.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.78 0.08 902058.58 280.07 1006012.52
00:06:20.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.82 0.08 1020183.62 228.58 2001982.60
00:06:20.044 ========================================================
00:06:20.044 Total : 325.60 0.16 959314.91 228.58 2001982.60
00:06:20.044
00:06:20.044 [2024-11-20 17:00:37.913931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19679a0 (9): Bad file descriptor
00:06:20.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:20.044 17:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:20.044 17:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:20.044 17:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2332190
00:06:20.044 17:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2332190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2332190) - No such process
00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2332190
00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:20.612 17:00:38
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2332190 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2332190 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.612 
17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.612 [2024-11-20 17:00:38.439894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2332883 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2332883 00:06:20.612 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.612 [2024-11-20 17:00:38.532786] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:21.180 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.180 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2332883 00:06:21.180 17:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.438 17:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.438 17:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2332883 00:06:21.438 17:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.005 17:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.005 17:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2332883 00:06:22.005 17:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.575 17:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.575 17:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2332883 00:06:22.575 17:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.142 17:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.142 17:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2332883 00:06:23.142 17:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.710 17:00:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:23.710 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2332883
00:06:23.710 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:23.710 Initializing NVMe Controllers
00:06:23.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:23.710 Controller IO queue size 128, less than required.
00:06:23.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:23.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:23.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:23.710 Initialization complete. Launching workers.
00:06:23.710 ========================================================
00:06:23.710 Latency(us)
00:06:23.710 Device Information : IOPS MiB/s Average min max
00:06:23.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002204.32 1000134.81 1041626.79
00:06:23.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004917.83 1000139.54 1042172.96
00:06:23.710 ========================================================
00:06:23.710 Total : 256.00 0.12 1003561.07 1000134.81 1042172.96
00:06:23.710
00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2332883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2332883) - No such process
00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 2332883 00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:23.971 17:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:23.971 rmmod nvme_tcp 00:06:23.971 rmmod nvme_fabrics 00:06:24.230 rmmod nvme_keyring 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2332164 ']' 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2332164 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2332164 ']' 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2332164 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:24.230 17:00:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2332164 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2332164' 00:06:24.230 killing process with pid 2332164 00:06:24.230 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2332164 00:06:24.231 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2332164 00:06:24.231 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.231 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.231 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.231 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:24.490 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:24.490 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:24.490 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.490 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.490 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:24.490 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.490 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.490 17:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.395 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:26.395 00:06:26.395 real 0m16.222s 00:06:26.395 user 0m29.271s 00:06:26.395 sys 0m5.464s 00:06:26.395 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.395 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.395 ************************************ 00:06:26.395 END TEST nvmf_delete_subsystem 00:06:26.395 ************************************ 00:06:26.395 17:00:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.395 17:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.395 17:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.395 17:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.395 ************************************ 00:06:26.395 START TEST nvmf_host_management 00:06:26.395 ************************************ 00:06:26.395 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.654 * Looking for test storage... 
00:06:26.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.654 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.654 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:26.655 17:00:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.655 17:00:44 
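In the trace above, scripts/common.sh decides whether the installed lcov predates 1.15 by splitting both version strings on `.`, `-`, and `:` into arrays (`ver1`/`ver2`) and comparing components numerically from left to right, padding the shorter version with zeros. A rough standalone equivalent is sketched below; `version_lt` is our name, not the actual `cmp_versions` helper, and it assumes plain decimal components:

```shell
#!/usr/bin/env bash
# Return success when $1 is strictly older than $2, comparing dotted
# components numerically left to right, as in the cmp_versions trace
# above. Assumes components are plain decimal integers (no leading
# zeros, no alphabetic suffixes).
version_lt() {
    local IFS=.-:
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Numeric comparison is the important design point: string comparison would wrongly rank "1.15" below "1.2", whereas component-wise numeric comparison ranks 15 above 2.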
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.655 --rc genhtml_branch_coverage=1 00:06:26.655 --rc genhtml_function_coverage=1 00:06:26.655 --rc genhtml_legend=1 00:06:26.655 --rc geninfo_all_blocks=1 00:06:26.655 --rc geninfo_unexecuted_blocks=1 00:06:26.655 00:06:26.655 ' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.655 --rc genhtml_branch_coverage=1 00:06:26.655 --rc genhtml_function_coverage=1 00:06:26.655 --rc genhtml_legend=1 00:06:26.655 --rc geninfo_all_blocks=1 00:06:26.655 --rc geninfo_unexecuted_blocks=1 00:06:26.655 00:06:26.655 ' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.655 --rc genhtml_branch_coverage=1 00:06:26.655 --rc genhtml_function_coverage=1 00:06:26.655 --rc genhtml_legend=1 00:06:26.655 --rc geninfo_all_blocks=1 00:06:26.655 --rc geninfo_unexecuted_blocks=1 00:06:26.655 00:06:26.655 ' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.655 --rc genhtml_branch_coverage=1 00:06:26.655 --rc genhtml_function_coverage=1 00:06:26.655 --rc genhtml_legend=1 00:06:26.655 --rc geninfo_all_blocks=1 00:06:26.655 --rc geninfo_unexecuted_blocks=1 00:06:26.655 00:06:26.655 ' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:26.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:26.655 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:26.656 17:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.229 17:00:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.229 17:00:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:33.229 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:33.229 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.229 17:00:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:33.229 Found net devices under 0000:86:00.0: cvl_0_0 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:33.229 Found net devices under 0000:86:00.1: cvl_0_1 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.229 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.230 17:00:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:06:33.230 00:06:33.230 --- 10.0.0.2 ping statistics --- 00:06:33.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.230 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:06:33.230 00:06:33.230 --- 10.0.0.1 ping statistics --- 00:06:33.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.230 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2337028 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2337028 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2337028 ']' 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
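Here nvmfappstart launches nvmf_tgt inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E`) and then blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock accepts connections, retrying up to max_retries=100 times. A minimal sketch of that polling pattern (the `waitforsocket` name is illustrative; the real helper lives in autotest_common.sh):

```shell
# Poll until a UNIX-domain socket appears at the given path, giving up
# after max_retries attempts. Returns 0 on success, 1 on timeout.
waitforsocket() {
    local rpc_addr=$1
    local max_retries=${2:-100}
    local i=0
    while [ ! -S "$rpc_addr" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

The real helper additionally issues an RPC over the socket to confirm the app is responsive, not merely listening.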
00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 [2024-11-20 17:00:50.780190] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:06:33.230 [2024-11-20 17:00:50.780253] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.230 [2024-11-20 17:00:50.860967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.230 [2024-11-20 17:00:50.904130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.230 [2024-11-20 17:00:50.904165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.230 [2024-11-20 17:00:50.904172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.230 [2024-11-20 17:00:50.904180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.230 [2024-11-20 17:00:50.904185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:33.230 [2024-11-20 17:00:50.905745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.230 [2024-11-20 17:00:50.905852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.230 [2024-11-20 17:00:50.905961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.230 [2024-11-20 17:00:50.905963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.230 17:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 [2024-11-20 17:00:51.042515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:33.230 17:00:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 Malloc0 00:06:33.230 [2024-11-20 17:00:51.112902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2337158 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2337158 /var/tmp/bdevperf.sock 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2337158 ']' 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.230 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:33.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:33.231 { 00:06:33.231 "params": { 00:06:33.231 "name": "Nvme$subsystem", 00:06:33.231 "trtype": "$TEST_TRANSPORT", 00:06:33.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:33.231 "adrfam": "ipv4", 00:06:33.231 "trsvcid": "$NVMF_PORT", 00:06:33.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:33.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:33.231 "hdgst": ${hdgst:-false}, 
00:06:33.231 "ddgst": ${ddgst:-false} 00:06:33.231 }, 00:06:33.231 "method": "bdev_nvme_attach_controller" 00:06:33.231 } 00:06:33.231 EOF 00:06:33.231 )") 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:33.231 17:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:33.231 "params": { 00:06:33.231 "name": "Nvme0", 00:06:33.231 "trtype": "tcp", 00:06:33.231 "traddr": "10.0.0.2", 00:06:33.231 "adrfam": "ipv4", 00:06:33.231 "trsvcid": "4420", 00:06:33.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:33.231 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:33.231 "hdgst": false, 00:06:33.231 "ddgst": false 00:06:33.231 }, 00:06:33.231 "method": "bdev_nvme_attach_controller" 00:06:33.231 }' 00:06:33.231 [2024-11-20 17:00:51.209707] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:06:33.231 [2024-11-20 17:00:51.209753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337158 ] 00:06:33.490 [2024-11-20 17:00:51.286565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.490 [2024-11-20 17:00:51.329120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.750 Running I/O for 10 seconds... 
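The heredoc/jq exchange above is gen_nvmf_target_json assembling the config bdevperf reads via `--json /dev/fd/63`: one bdev_nvme_attach_controller entry per subsystem id, filled in from $NVMF_FIRST_TARGET_IP and $NVMF_PORT. A simplified single-entry sketch (values hard-coded from this run; the real helper in nvmf/common.sh handles multiple subsystems, joins them with commas, and pipes the result through `jq .`):

```shell
# Emit one bdev_nvme_attach_controller config entry for the given
# subsystem id, matching the JSON printed in the log above.
gen_target_json() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  }
}
EOF
}
```

bdevperf then attaches controller Nvme0 to the in-namespace target at 10.0.0.2:4420 and runs the `-q 64 -o 65536 -w verify -t 10` workload against it.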
00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=930 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 930 -ge 100 ']' 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.319 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.319 [2024-11-20 17:00:52.128414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:06:34.319 [2024-11-20 17:00:52.128566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.319 [2024-11-20 17:00:52.128755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.319 [2024-11-20 17:00:52.128761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 
17:00:52.128890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.320 [2024-11-20 17:00:52.128963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:34.320 [2024-11-20 17:00:52.128970] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.128977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.128985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.128991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.128999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.320 [2024-11-20 17:00:52.129327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.320 [2024-11-20 17:00:52.129334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.321 [2024-11-20 17:00:52.129349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.321 [2024-11-20 17:00:52.129364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.321 [2024-11-20 17:00:52.129378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:34.321 [2024-11-20 17:00:52.129393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:06:34.321 [2024-11-20 17:00:52.129498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:06:34.321 [2024-11-20 17:00:52.129513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:06:34.321 [2024-11-20 17:00:52.129527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:06:34.321 [2024-11-20 17:00:52.129540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.129547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd70500 is same with the state(6) to be set
00:06:34.321 [2024-11-20 17:00:52.130412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:34.321 task offset: 896 on job bdev=Nvme0n1 fails
00:06:34.321
00:06:34.321 Latency(us)
00:06:34.321 [2024-11-20T16:00:52.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:34.321 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:34.321 Job: Nvme0n1 ended in about 0.52 seconds with error
00:06:34.321 Verification LBA range: start 0x0 length 0x400
00:06:34.321 Nvme0n1 : 0.52 1981.37 123.84 123.84 0.00 29719.07 1552.58 26963.38
00:06:34.321 [2024-11-20T16:00:52.364Z] ===================================================================================================================
00:06:34.321 [2024-11-20T16:00:52.364Z] Total : 1981.37 123.84 123.84 0.00 29719.07 1552.58 26963.38
00:06:34.321 [2024-11-20 17:00:52.132763] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:34.321 [2024-11-20 17:00:52.132784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd70500 (9): Bad file descriptor
00:06:34.321 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:34.321 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:34.321 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:34.321 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:34.321 [2024-11-20 17:00:52.135954] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:06:34.321 [2024-11-20 17:00:52.136025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:06:34.321 [2024-11-20 17:00:52.136049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:34.321 [2024-11-20 17:00:52.136064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:06:34.321 [2024-11-20 17:00:52.136072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:06:34.321 [2024-11-20 17:00:52.136079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:06:34.321 [2024-11-20 17:00:52.136085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd70500
00:06:34.321 [2024-11-20 17:00:52.136103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd70500 (9): Bad file descriptor
00:06:34.321 [2024-11-20 17:00:52.136114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:06:34.321 [2024-11-20 17:00:52.136121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:06:34.321 [2024-11-20 17:00:52.136130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:06:34.321 [2024-11-20 17:00:52.136139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:06:34.321 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:34.321 17:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2337158
00:06:35.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2337158) - No such process
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:35.257 {
00:06:35.257 "params": {
00:06:35.257 "name": "Nvme$subsystem",
00:06:35.257 "trtype": "$TEST_TRANSPORT",
00:06:35.257 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:35.257 "adrfam": "ipv4",
00:06:35.257 "trsvcid": "$NVMF_PORT",
00:06:35.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:35.257 "hostnqn":
"nqn.2016-06.io.spdk:host$subsystem", 00:06:35.257 "hdgst": ${hdgst:-false}, 00:06:35.257 "ddgst": ${ddgst:-false} 00:06:35.257 }, 00:06:35.257 "method": "bdev_nvme_attach_controller" 00:06:35.257 } 00:06:35.257 EOF 00:06:35.257 )") 00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:35.257 17:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:35.257 "params": { 00:06:35.257 "name": "Nvme0", 00:06:35.257 "trtype": "tcp", 00:06:35.257 "traddr": "10.0.0.2", 00:06:35.257 "adrfam": "ipv4", 00:06:35.257 "trsvcid": "4420", 00:06:35.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:35.257 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:35.257 "hdgst": false, 00:06:35.257 "ddgst": false 00:06:35.257 }, 00:06:35.257 "method": "bdev_nvme_attach_controller" 00:06:35.257 }' 00:06:35.257 [2024-11-20 17:00:53.202777] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:06:35.257 [2024-11-20 17:00:53.202823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337408 ] 00:06:35.257 [2024-11-20 17:00:53.280133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.516 [2024-11-20 17:00:53.318899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.516 Running I/O for 1 seconds... 
00:06:36.799 2048.00 IOPS, 128.00 MiB/s
00:06:36.799 Latency(us)
00:06:36.799 [2024-11-20T16:00:54.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:36.799 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:36.799 Verification LBA range: start 0x0 length 0x400
00:06:36.799 Nvme0n1 : 1.02 2063.56 128.97 0.00 0.00 30528.81 4681.14 26963.38
00:06:36.799 [2024-11-20T16:00:54.842Z] ===================================================================================================================
00:06:36.799 [2024-11-20T16:00:54.842Z] Total : 2063.56 128.97 0.00 0.00 30528.81 4681.14 26963.38
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:36.799 rmmod nvme_tcp
00:06:36.799 rmmod nvme_fabrics
00:06:36.799 rmmod nvme_keyring
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2337028 ']'
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2337028
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2337028 ']'
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2337028
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337028
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337028'
killing process with pid 2337028
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2337028
00:06:36.799 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2337028
00:06:37.059 [2024-11-20 17:00:54.954148] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:37.059 17:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:06:39.597
00:06:39.597 real 0m12.631s
00:06:39.597 user 0m20.367s
00:06:39.597 sys 0m5.756s
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:39.597 ************************************
00:06:39.597 END TEST nvmf_host_management
00:06:39.597 ************************************
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:39.597 ************************************
00:06:39.597 START TEST nvmf_lvol
00:06:39.597 ************************************
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:06:39.597 * Looking for test storage...
00:06:39.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:39.597 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:39.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.598 --rc genhtml_branch_coverage=1
00:06:39.598 --rc genhtml_function_coverage=1
00:06:39.598 --rc genhtml_legend=1
00:06:39.598 --rc geninfo_all_blocks=1
00:06:39.598 --rc geninfo_unexecuted_blocks=1
00:06:39.598
00:06:39.598 '
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:39.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.598 --rc genhtml_branch_coverage=1
00:06:39.598 --rc genhtml_function_coverage=1
00:06:39.598 --rc genhtml_legend=1
00:06:39.598 --rc geninfo_all_blocks=1
00:06:39.598 --rc geninfo_unexecuted_blocks=1
00:06:39.598
00:06:39.598 '
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:39.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.598 --rc genhtml_branch_coverage=1
00:06:39.598 --rc genhtml_function_coverage=1
00:06:39.598 --rc genhtml_legend=1
00:06:39.598 --rc geninfo_all_blocks=1
00:06:39.598 --rc geninfo_unexecuted_blocks=1
00:06:39.598
00:06:39.598 '
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:39.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:39.598 --rc genhtml_branch_coverage=1
00:06:39.598 --rc genhtml_function_coverage=1
00:06:39.598 --rc genhtml_legend=1
00:06:39.598 --rc geninfo_all_blocks=1
00:06:39.598 --rc geninfo_unexecuted_blocks=1
00:06:39.598
00:06:39.598 '
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:06:39.598 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
00:06:39.599 17:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:46.170 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=()
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=()
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=()
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=()
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol --
nvmf/common.sh@322 -- # mlx=() 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:46.171 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:46.171 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.171 
17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:46.171 Found net devices under 0000:86:00.0: cvl_0_0 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.171 17:01:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:46.171 Found net devices under 0000:86:00.1: cvl_0_1 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.171 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:46.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:06:46.172 00:06:46.172 --- 10.0.0.2 ping statistics --- 00:06:46.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.172 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:06:46.172 00:06:46.172 --- 10.0.0.1 ping statistics --- 00:06:46.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.172 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2341367 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2341367 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2341367 ']' 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.172 [2024-11-20 17:01:03.453086] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:06:46.172 [2024-11-20 17:01:03.453131] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.172 [2024-11-20 17:01:03.532001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.172 [2024-11-20 17:01:03.573004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.172 [2024-11-20 17:01:03.573039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.172 [2024-11-20 17:01:03.573046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.172 [2024-11-20 17:01:03.573052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.172 [2024-11-20 17:01:03.573058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:46.172 [2024-11-20 17:01:03.574435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.172 [2024-11-20 17:01:03.574547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.172 [2024-11-20 17:01:03.574548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.172 [2024-11-20 17:01:03.876103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.172 17:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.172 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:46.172 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.431 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:46.431 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:46.690 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:46.949 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ec05f161-53dd-40b3-a47d-13d4c5470efd 00:06:46.949 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec05f161-53dd-40b3-a47d-13d4c5470efd lvol 20 00:06:46.949 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f7c97a6c-99ac-4fdd-89d1-e0368f8b2f2e 00:06:46.949 17:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.208 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7c97a6c-99ac-4fdd-89d1-e0368f8b2f2e 00:06:47.468 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:47.726 [2024-11-20 17:01:05.522392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.727 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.727 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2341674 00:06:47.727 17:01:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:47.727 17:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:49.105 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f7c97a6c-99ac-4fdd-89d1-e0368f8b2f2e MY_SNAPSHOT 00:06:49.106 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2c92a3f5-7900-4e64-a8f3-75255767ad3a 00:06:49.106 17:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f7c97a6c-99ac-4fdd-89d1-e0368f8b2f2e 30 00:06:49.364 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2c92a3f5-7900-4e64-a8f3-75255767ad3a MY_CLONE 00:06:49.622 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=700eb65c-b67e-484c-9c1b-12daf5abdbf1 00:06:49.622 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 700eb65c-b67e-484c-9c1b-12daf5abdbf1 00:06:50.191 17:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2341674 00:06:58.311 Initializing NVMe Controllers 00:06:58.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:58.311 Controller IO queue size 128, less than required. 00:06:58.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:58.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:58.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:58.311 Initialization complete. Launching workers. 00:06:58.311 ======================================================== 00:06:58.311 Latency(us) 00:06:58.311 Device Information : IOPS MiB/s Average min max 00:06:58.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12401.05 48.44 10327.75 1563.28 96510.88 00:06:58.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12269.75 47.93 10434.24 3510.81 48896.61 00:06:58.311 ======================================================== 00:06:58.311 Total : 24670.80 96.37 10380.71 1563.28 96510.88 00:06:58.311 00:06:58.311 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:58.570 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f7c97a6c-99ac-4fdd-89d1-e0368f8b2f2e 00:06:58.570 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec05f161-53dd-40b3-a47d-13d4c5470efd 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:58.828 rmmod nvme_tcp 00:06:58.828 rmmod nvme_fabrics 00:06:58.828 rmmod nvme_keyring 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2341367 ']' 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2341367 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2341367 ']' 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2341367 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:58.828 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.829 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2341367 00:06:59.087 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.087 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.087 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2341367' 00:06:59.087 killing process with pid 2341367 00:06:59.087 17:01:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2341367 00:06:59.087 17:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2341367 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.087 17:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.625 00:07:01.625 real 0m22.034s 00:07:01.625 user 1m3.145s 00:07:01.625 sys 0m7.653s 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.625 ************************************ 00:07:01.625 END TEST 
nvmf_lvol 00:07:01.625 ************************************ 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.625 ************************************ 00:07:01.625 START TEST nvmf_lvs_grow 00:07:01.625 ************************************ 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:01.625 * Looking for test storage... 00:07:01.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.625 17:01:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.625 --rc genhtml_branch_coverage=1 00:07:01.625 --rc genhtml_function_coverage=1 00:07:01.625 --rc genhtml_legend=1 00:07:01.625 --rc geninfo_all_blocks=1 00:07:01.625 --rc geninfo_unexecuted_blocks=1 00:07:01.625 00:07:01.625 ' 
00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.625 --rc genhtml_branch_coverage=1 00:07:01.625 --rc genhtml_function_coverage=1 00:07:01.625 --rc genhtml_legend=1 00:07:01.625 --rc geninfo_all_blocks=1 00:07:01.625 --rc geninfo_unexecuted_blocks=1 00:07:01.625 00:07:01.625 ' 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.625 --rc genhtml_branch_coverage=1 00:07:01.625 --rc genhtml_function_coverage=1 00:07:01.625 --rc genhtml_legend=1 00:07:01.625 --rc geninfo_all_blocks=1 00:07:01.625 --rc geninfo_unexecuted_blocks=1 00:07:01.625 00:07:01.625 ' 00:07:01.625 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.625 --rc genhtml_branch_coverage=1 00:07:01.625 --rc genhtml_function_coverage=1 00:07:01.625 --rc genhtml_legend=1 00:07:01.625 --rc geninfo_all_blocks=1 00:07:01.625 --rc geninfo_unexecuted_blocks=1 00:07:01.625 00:07:01.625 ' 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.626 17:01:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.626 
17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.626 17:01:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.626 
17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.626 17:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:08.200 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:08.200 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.200 
17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:08.200 Found net devices under 0000:86:00.0: cvl_0_0 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:08.200 Found net devices under 0000:86:00.1: cvl_0_1 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.200 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:08.200 17:01:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:08.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:07:08.200 00:07:08.200 --- 10.0.0.2 ping statistics --- 00:07:08.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.200 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:07:08.201 00:07:08.201 --- 10.0.0.1 ping statistics --- 00:07:08.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.201 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2347167 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2347167 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2347167 ']' 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.201 [2024-11-20 17:01:25.501889] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:07:08.201 [2024-11-20 17:01:25.501938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.201 [2024-11-20 17:01:25.578570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.201 [2024-11-20 17:01:25.620158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.201 [2024-11-20 17:01:25.620193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.201 [2024-11-20 17:01:25.620200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.201 [2024-11-20 17:01:25.620212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.201 [2024-11-20 17:01:25.620217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:08.201 [2024-11-20 17:01:25.620779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:08.201 [2024-11-20 17:01:25.913442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.201 ************************************ 00:07:08.201 START TEST lvs_grow_clean 00:07:08.201 ************************************ 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.201 17:01:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.201 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:08.201 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:08.460 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3a504f68-58c2-48df-ada0-15c982325171 00:07:08.460 17:01:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:08.460 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:08.719 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:08.719 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:08.719 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3a504f68-58c2-48df-ada0-15c982325171 lvol 150 00:07:08.719 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7 00:07:08.719 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.719 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:08.978 [2024-11-20 17:01:26.920077] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:08.978 [2024-11-20 17:01:26.920123] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:08.978 true 00:07:08.978 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:08.978 17:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:09.237 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:09.237 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.497 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7 00:07:09.497 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:09.757 [2024-11-20 17:01:27.646280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.757 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2347564 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2347564 /var/tmp/bdevperf.sock 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2347564 ']' 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:10.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.016 17:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:10.016 [2024-11-20 17:01:27.878382] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:07:10.016 [2024-11-20 17:01:27.878427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347564 ] 00:07:10.016 [2024-11-20 17:01:27.952610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.016 [2024-11-20 17:01:27.992431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.274 17:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.274 17:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:10.274 17:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:10.533 Nvme0n1 00:07:10.533 17:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:10.792 [ 00:07:10.792 { 00:07:10.792 "name": "Nvme0n1", 00:07:10.792 "aliases": [ 00:07:10.792 "003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7" 00:07:10.792 ], 00:07:10.792 "product_name": "NVMe disk", 00:07:10.792 "block_size": 4096, 00:07:10.792 "num_blocks": 38912, 00:07:10.792 "uuid": "003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7", 00:07:10.792 "numa_id": 1, 00:07:10.792 "assigned_rate_limits": { 00:07:10.792 "rw_ios_per_sec": 0, 00:07:10.792 "rw_mbytes_per_sec": 0, 00:07:10.792 "r_mbytes_per_sec": 0, 00:07:10.792 "w_mbytes_per_sec": 0 00:07:10.792 }, 00:07:10.792 "claimed": false, 00:07:10.792 "zoned": false, 00:07:10.792 "supported_io_types": { 00:07:10.792 "read": true, 
00:07:10.792 "write": true, 00:07:10.792 "unmap": true, 00:07:10.792 "flush": true, 00:07:10.792 "reset": true, 00:07:10.792 "nvme_admin": true, 00:07:10.792 "nvme_io": true, 00:07:10.792 "nvme_io_md": false, 00:07:10.792 "write_zeroes": true, 00:07:10.792 "zcopy": false, 00:07:10.792 "get_zone_info": false, 00:07:10.792 "zone_management": false, 00:07:10.792 "zone_append": false, 00:07:10.792 "compare": true, 00:07:10.792 "compare_and_write": true, 00:07:10.792 "abort": true, 00:07:10.792 "seek_hole": false, 00:07:10.792 "seek_data": false, 00:07:10.792 "copy": true, 00:07:10.792 "nvme_iov_md": false 00:07:10.792 }, 00:07:10.792 "memory_domains": [ 00:07:10.792 { 00:07:10.792 "dma_device_id": "system", 00:07:10.792 "dma_device_type": 1 00:07:10.792 } 00:07:10.792 ], 00:07:10.792 "driver_specific": { 00:07:10.792 "nvme": [ 00:07:10.792 { 00:07:10.792 "trid": { 00:07:10.792 "trtype": "TCP", 00:07:10.792 "adrfam": "IPv4", 00:07:10.792 "traddr": "10.0.0.2", 00:07:10.792 "trsvcid": "4420", 00:07:10.792 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:10.792 }, 00:07:10.792 "ctrlr_data": { 00:07:10.792 "cntlid": 1, 00:07:10.792 "vendor_id": "0x8086", 00:07:10.792 "model_number": "SPDK bdev Controller", 00:07:10.792 "serial_number": "SPDK0", 00:07:10.792 "firmware_revision": "25.01", 00:07:10.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:10.792 "oacs": { 00:07:10.792 "security": 0, 00:07:10.792 "format": 0, 00:07:10.792 "firmware": 0, 00:07:10.792 "ns_manage": 0 00:07:10.792 }, 00:07:10.792 "multi_ctrlr": true, 00:07:10.792 "ana_reporting": false 00:07:10.792 }, 00:07:10.792 "vs": { 00:07:10.792 "nvme_version": "1.3" 00:07:10.792 }, 00:07:10.792 "ns_data": { 00:07:10.792 "id": 1, 00:07:10.792 "can_share": true 00:07:10.792 } 00:07:10.792 } 00:07:10.792 ], 00:07:10.792 "mp_policy": "active_passive" 00:07:10.792 } 00:07:10.792 } 00:07:10.792 ] 00:07:10.792 17:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:10.792 17:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2347790 00:07:10.792 17:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:10.792 Running I/O for 10 seconds... 00:07:11.737 Latency(us) 00:07:11.737 [2024-11-20T16:01:29.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.737 Nvme0n1 : 1.00 23316.00 91.08 0.00 0.00 0.00 0.00 0.00 00:07:11.737 [2024-11-20T16:01:29.780Z] =================================================================================================================== 00:07:11.737 [2024-11-20T16:01:29.780Z] Total : 23316.00 91.08 0.00 0.00 0.00 0.00 0.00 00:07:11.737 00:07:12.673 17:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:12.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.931 Nvme0n1 : 2.00 23222.50 90.71 0.00 0.00 0.00 0.00 0.00 00:07:12.931 [2024-11-20T16:01:30.974Z] =================================================================================================================== 00:07:12.931 [2024-11-20T16:01:30.974Z] Total : 23222.50 90.71 0.00 0.00 0.00 0.00 0.00 00:07:12.931 00:07:12.931 true 00:07:12.931 17:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:12.931 17:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3a504f68-58c2-48df-ada0-15c982325171 00:07:13.190 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:13.190 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:13.190 17:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2347790 00:07:13.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.757 Nvme0n1 : 3.00 23266.67 90.89 0.00 0.00 0.00 0.00 0.00 00:07:13.757 [2024-11-20T16:01:31.800Z] =================================================================================================================== 00:07:13.757 [2024-11-20T16:01:31.800Z] Total : 23266.67 90.89 0.00 0.00 0.00 0.00 0.00 00:07:13.757 00:07:15.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.133 Nvme0n1 : 4.00 23373.50 91.30 0.00 0.00 0.00 0.00 0.00 00:07:15.133 [2024-11-20T16:01:33.176Z] =================================================================================================================== 00:07:15.133 [2024-11-20T16:01:33.176Z] Total : 23373.50 91.30 0.00 0.00 0.00 0.00 0.00 00:07:15.133 00:07:16.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.068 Nvme0n1 : 5.00 23442.80 91.57 0.00 0.00 0.00 0.00 0.00 00:07:16.068 [2024-11-20T16:01:34.111Z] =================================================================================================================== 00:07:16.068 [2024-11-20T16:01:34.111Z] Total : 23442.80 91.57 0.00 0.00 0.00 0.00 0.00 00:07:16.068 00:07:17.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.003 Nvme0n1 : 6.00 23505.17 91.82 0.00 0.00 0.00 0.00 0.00 00:07:17.003 [2024-11-20T16:01:35.046Z] =================================================================================================================== 
00:07:17.003 [2024-11-20T16:01:35.046Z] Total : 23505.17 91.82 0.00 0.00 0.00 0.00 0.00 00:07:17.003 00:07:17.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.938 Nvme0n1 : 7.00 23541.43 91.96 0.00 0.00 0.00 0.00 0.00 00:07:17.938 [2024-11-20T16:01:35.981Z] =================================================================================================================== 00:07:17.938 [2024-11-20T16:01:35.982Z] Total : 23541.43 91.96 0.00 0.00 0.00 0.00 0.00 00:07:17.939 00:07:18.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.874 Nvme0n1 : 8.00 23543.75 91.97 0.00 0.00 0.00 0.00 0.00 00:07:18.874 [2024-11-20T16:01:36.917Z] =================================================================================================================== 00:07:18.874 [2024-11-20T16:01:36.917Z] Total : 23543.75 91.97 0.00 0.00 0.00 0.00 0.00 00:07:18.874 00:07:19.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.810 Nvme0n1 : 9.00 23569.11 92.07 0.00 0.00 0.00 0.00 0.00 00:07:19.810 [2024-11-20T16:01:37.853Z] =================================================================================================================== 00:07:19.810 [2024-11-20T16:01:37.853Z] Total : 23569.11 92.07 0.00 0.00 0.00 0.00 0.00 00:07:19.810 00:07:20.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.743 Nvme0n1 : 10.00 23583.60 92.12 0.00 0.00 0.00 0.00 0.00 00:07:20.743 [2024-11-20T16:01:38.786Z] =================================================================================================================== 00:07:20.743 [2024-11-20T16:01:38.786Z] Total : 23583.60 92.12 0.00 0.00 0.00 0.00 0.00 00:07:20.743 00:07:20.743 00:07:20.743 Latency(us) 00:07:20.743 [2024-11-20T16:01:38.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:07:20.743 Nvme0n1 : 10.00 23587.82 92.14 0.00 0.00 5423.47 1646.20 10485.76 00:07:20.743 [2024-11-20T16:01:38.786Z] =================================================================================================================== 00:07:20.743 [2024-11-20T16:01:38.786Z] Total : 23587.82 92.14 0.00 0.00 5423.47 1646.20 10485.76 00:07:20.743 { 00:07:20.743 "results": [ 00:07:20.743 { 00:07:20.744 "job": "Nvme0n1", 00:07:20.744 "core_mask": "0x2", 00:07:20.744 "workload": "randwrite", 00:07:20.744 "status": "finished", 00:07:20.744 "queue_depth": 128, 00:07:20.744 "io_size": 4096, 00:07:20.744 "runtime": 10.003637, 00:07:20.744 "iops": 23587.82110946249, 00:07:20.744 "mibps": 92.13992620883785, 00:07:20.744 "io_failed": 0, 00:07:20.744 "io_timeout": 0, 00:07:20.744 "avg_latency_us": 5423.47369146706, 00:07:20.744 "min_latency_us": 1646.2019047619049, 00:07:20.744 "max_latency_us": 10485.76 00:07:20.744 } 00:07:20.744 ], 00:07:20.744 "core_count": 1 00:07:20.744 } 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2347564 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2347564 ']' 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2347564 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2347564 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:21.002 17:01:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2347564' 00:07:21.002 killing process with pid 2347564 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2347564 00:07:21.002 Received shutdown signal, test time was about 10.000000 seconds 00:07:21.002 00:07:21.002 Latency(us) 00:07:21.002 [2024-11-20T16:01:39.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.002 [2024-11-20T16:01:39.045Z] =================================================================================================================== 00:07:21.002 [2024-11-20T16:01:39.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2347564 00:07:21.002 17:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.261 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:21.519 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:21.519 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:21.777 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:21.777 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:21.777 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.777 [2024-11-20 17:01:39.788770] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:21.777 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:21.777 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:21.777 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:21.778 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.036 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.036 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.036 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.036 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.036 
17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.036 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.036 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.036 17:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:22.036 request: 00:07:22.036 { 00:07:22.036 "uuid": "3a504f68-58c2-48df-ada0-15c982325171", 00:07:22.036 "method": "bdev_lvol_get_lvstores", 00:07:22.036 "req_id": 1 00:07:22.036 } 00:07:22.036 Got JSON-RPC error response 00:07:22.036 response: 00:07:22.036 { 00:07:22.036 "code": -19, 00:07:22.036 "message": "No such device" 00:07:22.036 } 00:07:22.036 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:22.036 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.036 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.036 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.036 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.293 aio_bdev 00:07:22.293 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7 00:07:22.293 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7 00:07:22.293 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.293 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:22.293 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.293 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.293 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:22.605 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7 -t 2000 00:07:22.605 [ 00:07:22.605 { 00:07:22.605 "name": "003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7", 00:07:22.605 "aliases": [ 00:07:22.605 "lvs/lvol" 00:07:22.605 ], 00:07:22.605 "product_name": "Logical Volume", 00:07:22.605 "block_size": 4096, 00:07:22.605 "num_blocks": 38912, 00:07:22.605 "uuid": "003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7", 00:07:22.605 "assigned_rate_limits": { 00:07:22.605 "rw_ios_per_sec": 0, 00:07:22.605 "rw_mbytes_per_sec": 0, 00:07:22.605 "r_mbytes_per_sec": 0, 00:07:22.605 "w_mbytes_per_sec": 0 00:07:22.605 }, 00:07:22.605 "claimed": false, 00:07:22.605 "zoned": false, 00:07:22.605 "supported_io_types": { 00:07:22.605 "read": true, 00:07:22.605 "write": true, 00:07:22.605 "unmap": true, 00:07:22.605 "flush": false, 00:07:22.605 "reset": true, 00:07:22.605 
"nvme_admin": false, 00:07:22.605 "nvme_io": false, 00:07:22.605 "nvme_io_md": false, 00:07:22.605 "write_zeroes": true, 00:07:22.605 "zcopy": false, 00:07:22.605 "get_zone_info": false, 00:07:22.605 "zone_management": false, 00:07:22.605 "zone_append": false, 00:07:22.605 "compare": false, 00:07:22.605 "compare_and_write": false, 00:07:22.605 "abort": false, 00:07:22.605 "seek_hole": true, 00:07:22.605 "seek_data": true, 00:07:22.605 "copy": false, 00:07:22.605 "nvme_iov_md": false 00:07:22.605 }, 00:07:22.605 "driver_specific": { 00:07:22.605 "lvol": { 00:07:22.605 "lvol_store_uuid": "3a504f68-58c2-48df-ada0-15c982325171", 00:07:22.605 "base_bdev": "aio_bdev", 00:07:22.605 "thin_provision": false, 00:07:22.605 "num_allocated_clusters": 38, 00:07:22.605 "snapshot": false, 00:07:22.605 "clone": false, 00:07:22.605 "esnap_clone": false 00:07:22.605 } 00:07:22.605 } 00:07:22.605 } 00:07:22.605 ] 00:07:22.605 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:22.605 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:22.605 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:22.862 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:22.862 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:22.863 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:23.123 17:01:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:23.123 17:01:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 003a8a2d-12ec-45e9-ae7f-e007a2b8e9e7 00:07:23.123 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a504f68-58c2-48df-ada0-15c982325171 00:07:23.407 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:23.695 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.695 00:07:23.695 real 0m15.589s 00:07:23.695 user 0m15.198s 00:07:23.695 sys 0m1.426s 00:07:23.695 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.695 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:23.695 ************************************ 00:07:23.695 END TEST lvs_grow_clean 00:07:23.695 ************************************ 00:07:23.695 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:23.695 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.696 ************************************ 
00:07:23.696 START TEST lvs_grow_dirty 00:07:23.696 ************************************ 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.696 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.958 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:23.958 17:01:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:24.216 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=38646052-d396-4abd-bebc-1a13af7c5fae 00:07:24.216 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:24.216 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:24.216 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:24.216 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:24.216 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 38646052-d396-4abd-bebc-1a13af7c5fae lvol 150 00:07:24.475 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=eb058b23-9544-4a3f-82de-ed30456b7572 00:07:24.475 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.475 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:24.733 [2024-11-20 17:01:42.608132] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:24.733 [2024-11-20 17:01:42.608177] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:24.733 true 00:07:24.733 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:24.733 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:24.990 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:24.990 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:24.990 17:01:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eb058b23-9544-4a3f-82de-ed30456b7572 00:07:25.248 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:25.507 [2024-11-20 17:01:43.338424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2350329 00:07:25.507 17:01:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2350329 /var/tmp/bdevperf.sock 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2350329 ']' 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:25.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.507 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.766 [2024-11-20 17:01:43.581977] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:07:25.766 [2024-11-20 17:01:43.582022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350329 ] 00:07:25.766 [2024-11-20 17:01:43.656620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.766 [2024-11-20 17:01:43.696529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.766 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.766 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:25.766 17:01:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:26.333 Nvme0n1 00:07:26.333 17:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:26.591 [ 00:07:26.591 { 00:07:26.591 "name": "Nvme0n1", 00:07:26.591 "aliases": [ 00:07:26.591 "eb058b23-9544-4a3f-82de-ed30456b7572" 00:07:26.591 ], 00:07:26.591 "product_name": "NVMe disk", 00:07:26.591 "block_size": 4096, 00:07:26.591 "num_blocks": 38912, 00:07:26.591 "uuid": "eb058b23-9544-4a3f-82de-ed30456b7572", 00:07:26.591 "numa_id": 1, 00:07:26.591 "assigned_rate_limits": { 00:07:26.591 "rw_ios_per_sec": 0, 00:07:26.591 "rw_mbytes_per_sec": 0, 00:07:26.591 "r_mbytes_per_sec": 0, 00:07:26.591 "w_mbytes_per_sec": 0 00:07:26.591 }, 00:07:26.591 "claimed": false, 00:07:26.591 "zoned": false, 00:07:26.591 "supported_io_types": { 00:07:26.591 "read": true, 
00:07:26.591 "write": true, 00:07:26.591 "unmap": true, 00:07:26.591 "flush": true, 00:07:26.591 "reset": true, 00:07:26.591 "nvme_admin": true, 00:07:26.591 "nvme_io": true, 00:07:26.591 "nvme_io_md": false, 00:07:26.591 "write_zeroes": true, 00:07:26.591 "zcopy": false, 00:07:26.591 "get_zone_info": false, 00:07:26.591 "zone_management": false, 00:07:26.591 "zone_append": false, 00:07:26.591 "compare": true, 00:07:26.591 "compare_and_write": true, 00:07:26.591 "abort": true, 00:07:26.591 "seek_hole": false, 00:07:26.591 "seek_data": false, 00:07:26.591 "copy": true, 00:07:26.591 "nvme_iov_md": false 00:07:26.591 }, 00:07:26.591 "memory_domains": [ 00:07:26.591 { 00:07:26.591 "dma_device_id": "system", 00:07:26.591 "dma_device_type": 1 00:07:26.591 } 00:07:26.591 ], 00:07:26.591 "driver_specific": { 00:07:26.591 "nvme": [ 00:07:26.591 { 00:07:26.591 "trid": { 00:07:26.591 "trtype": "TCP", 00:07:26.591 "adrfam": "IPv4", 00:07:26.591 "traddr": "10.0.0.2", 00:07:26.591 "trsvcid": "4420", 00:07:26.591 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:26.591 }, 00:07:26.591 "ctrlr_data": { 00:07:26.591 "cntlid": 1, 00:07:26.591 "vendor_id": "0x8086", 00:07:26.591 "model_number": "SPDK bdev Controller", 00:07:26.591 "serial_number": "SPDK0", 00:07:26.591 "firmware_revision": "25.01", 00:07:26.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.591 "oacs": { 00:07:26.591 "security": 0, 00:07:26.591 "format": 0, 00:07:26.591 "firmware": 0, 00:07:26.591 "ns_manage": 0 00:07:26.591 }, 00:07:26.591 "multi_ctrlr": true, 00:07:26.591 "ana_reporting": false 00:07:26.591 }, 00:07:26.591 "vs": { 00:07:26.591 "nvme_version": "1.3" 00:07:26.591 }, 00:07:26.591 "ns_data": { 00:07:26.591 "id": 1, 00:07:26.591 "can_share": true 00:07:26.591 } 00:07:26.591 } 00:07:26.591 ], 00:07:26.591 "mp_policy": "active_passive" 00:07:26.591 } 00:07:26.591 } 00:07:26.591 ] 00:07:26.591 17:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2350399 00:07:26.591 17:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:26.591 17:01:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:26.591 Running I/O for 10 seconds... 00:07:27.527 Latency(us) 00:07:27.527 [2024-11-20T16:01:45.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.527 Nvme0n1 : 1.00 23259.00 90.86 0.00 0.00 0.00 0.00 0.00 00:07:27.527 [2024-11-20T16:01:45.570Z] =================================================================================================================== 00:07:27.527 [2024-11-20T16:01:45.570Z] Total : 23259.00 90.86 0.00 0.00 0.00 0.00 0.00 00:07:27.527 00:07:28.462 17:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:28.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.720 Nvme0n1 : 2.00 23410.00 91.45 0.00 0.00 0.00 0.00 0.00 00:07:28.720 [2024-11-20T16:01:46.763Z] =================================================================================================================== 00:07:28.720 [2024-11-20T16:01:46.763Z] Total : 23410.00 91.45 0.00 0.00 0.00 0.00 0.00 00:07:28.720 00:07:28.720 true 00:07:28.720 17:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:28.720 17:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:28.978 17:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:28.978 17:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:28.978 17:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2350399 00:07:29.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.544 Nvme0n1 : 3.00 23455.67 91.62 0.00 0.00 0.00 0.00 0.00 00:07:29.544 [2024-11-20T16:01:47.587Z] =================================================================================================================== 00:07:29.544 [2024-11-20T16:01:47.587Z] Total : 23455.67 91.62 0.00 0.00 0.00 0.00 0.00 00:07:29.544 00:07:30.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.919 Nvme0n1 : 4.00 23515.50 91.86 0.00 0.00 0.00 0.00 0.00 00:07:30.919 [2024-11-20T16:01:48.962Z] =================================================================================================================== 00:07:30.919 [2024-11-20T16:01:48.962Z] Total : 23515.50 91.86 0.00 0.00 0.00 0.00 0.00 00:07:30.919 00:07:31.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.851 Nvme0n1 : 5.00 23554.60 92.01 0.00 0.00 0.00 0.00 0.00 00:07:31.851 [2024-11-20T16:01:49.894Z] =================================================================================================================== 00:07:31.851 [2024-11-20T16:01:49.894Z] Total : 23554.60 92.01 0.00 0.00 0.00 0.00 0.00 00:07:31.851 00:07:32.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.786 Nvme0n1 : 6.00 23515.00 91.86 0.00 0.00 0.00 0.00 0.00 00:07:32.786 [2024-11-20T16:01:50.829Z] =================================================================================================================== 00:07:32.786 
[2024-11-20T16:01:50.829Z] Total : 23515.00 91.86 0.00 0.00 0.00 0.00 0.00 00:07:32.786 00:07:33.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.721 Nvme0n1 : 7.00 23522.43 91.88 0.00 0.00 0.00 0.00 0.00 00:07:33.721 [2024-11-20T16:01:51.764Z] =================================================================================================================== 00:07:33.721 [2024-11-20T16:01:51.764Z] Total : 23522.43 91.88 0.00 0.00 0.00 0.00 0.00 00:07:33.721 00:07:34.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.656 Nvme0n1 : 8.00 23559.12 92.03 0.00 0.00 0.00 0.00 0.00 00:07:34.656 [2024-11-20T16:01:52.699Z] =================================================================================================================== 00:07:34.656 [2024-11-20T16:01:52.699Z] Total : 23559.12 92.03 0.00 0.00 0.00 0.00 0.00 00:07:34.656 00:07:35.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.590 Nvme0n1 : 9.00 23581.89 92.12 0.00 0.00 0.00 0.00 0.00 00:07:35.590 [2024-11-20T16:01:53.633Z] =================================================================================================================== 00:07:35.590 [2024-11-20T16:01:53.633Z] Total : 23581.89 92.12 0.00 0.00 0.00 0.00 0.00 00:07:35.590 00:07:36.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.525 Nvme0n1 : 10.00 23599.00 92.18 0.00 0.00 0.00 0.00 0.00 00:07:36.525 [2024-11-20T16:01:54.568Z] =================================================================================================================== 00:07:36.525 [2024-11-20T16:01:54.568Z] Total : 23599.00 92.18 0.00 0.00 0.00 0.00 0.00 00:07:36.525 00:07:36.525 00:07:36.525 Latency(us) 00:07:36.525 [2024-11-20T16:01:54.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:36.525 Nvme0n1 : 10.00 23594.74 92.17 0.00 0.00 5421.68 3120.76 10673.01 00:07:36.525 [2024-11-20T16:01:54.568Z] =================================================================================================================== 00:07:36.525 [2024-11-20T16:01:54.568Z] Total : 23594.74 92.17 0.00 0.00 5421.68 3120.76 10673.01 00:07:36.525 { 00:07:36.525 "results": [ 00:07:36.525 { 00:07:36.525 "job": "Nvme0n1", 00:07:36.525 "core_mask": "0x2", 00:07:36.525 "workload": "randwrite", 00:07:36.525 "status": "finished", 00:07:36.525 "queue_depth": 128, 00:07:36.525 "io_size": 4096, 00:07:36.525 "runtime": 10.003798, 00:07:36.525 "iops": 23594.738718234814, 00:07:36.525 "mibps": 92.16694811810474, 00:07:36.525 "io_failed": 0, 00:07:36.525 "io_timeout": 0, 00:07:36.525 "avg_latency_us": 5421.677278441212, 00:07:36.525 "min_latency_us": 3120.7619047619046, 00:07:36.525 "max_latency_us": 10673.005714285715 00:07:36.525 } 00:07:36.525 ], 00:07:36.525 "core_count": 1 00:07:36.525 } 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2350329 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2350329 ']' 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2350329 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2350329 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.784 17:01:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2350329' 00:07:36.784 killing process with pid 2350329 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2350329 00:07:36.784 Received shutdown signal, test time was about 10.000000 seconds 00:07:36.784 00:07:36.784 Latency(us) 00:07:36.784 [2024-11-20T16:01:54.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.784 [2024-11-20T16:01:54.827Z] =================================================================================================================== 00:07:36.784 [2024-11-20T16:01:54.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2350329 00:07:36.784 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.042 17:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.300 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:37.300 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2347167 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2347167 00:07:37.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2347167 Killed "${NVMF_APP[@]}" "$@" 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2352248 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2352248 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2352248 ']' 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.559 17:01:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.559 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 [2024-11-20 17:01:55.442056] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:07:37.559 [2024-11-20 17:01:55.442102] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.559 [2024-11-20 17:01:55.520498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.559 [2024-11-20 17:01:55.558491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.559 [2024-11-20 17:01:55.558524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.559 [2024-11-20 17:01:55.558532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.559 [2024-11-20 17:01:55.558538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.559 [2024-11-20 17:01:55.558542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:37.559 [2024-11-20 17:01:55.559133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.817 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.817 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:37.817 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.817 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.817 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.817 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.817 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.075 [2024-11-20 17:01:55.877506] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:38.075 [2024-11-20 17:01:55.877599] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:38.075 [2024-11-20 17:01:55.877625] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:38.075 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:38.075 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev eb058b23-9544-4a3f-82de-ed30456b7572 00:07:38.075 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=eb058b23-9544-4a3f-82de-ed30456b7572 
00:07:38.075 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.075 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:38.075 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.075 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.075 17:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.075 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eb058b23-9544-4a3f-82de-ed30456b7572 -t 2000 00:07:38.334 [ 00:07:38.334 { 00:07:38.334 "name": "eb058b23-9544-4a3f-82de-ed30456b7572", 00:07:38.334 "aliases": [ 00:07:38.334 "lvs/lvol" 00:07:38.334 ], 00:07:38.334 "product_name": "Logical Volume", 00:07:38.334 "block_size": 4096, 00:07:38.334 "num_blocks": 38912, 00:07:38.334 "uuid": "eb058b23-9544-4a3f-82de-ed30456b7572", 00:07:38.334 "assigned_rate_limits": { 00:07:38.334 "rw_ios_per_sec": 0, 00:07:38.334 "rw_mbytes_per_sec": 0, 00:07:38.334 "r_mbytes_per_sec": 0, 00:07:38.334 "w_mbytes_per_sec": 0 00:07:38.334 }, 00:07:38.334 "claimed": false, 00:07:38.334 "zoned": false, 00:07:38.334 "supported_io_types": { 00:07:38.334 "read": true, 00:07:38.334 "write": true, 00:07:38.334 "unmap": true, 00:07:38.334 "flush": false, 00:07:38.334 "reset": true, 00:07:38.334 "nvme_admin": false, 00:07:38.334 "nvme_io": false, 00:07:38.334 "nvme_io_md": false, 00:07:38.334 "write_zeroes": true, 00:07:38.334 "zcopy": false, 00:07:38.334 "get_zone_info": false, 00:07:38.334 "zone_management": false, 00:07:38.334 "zone_append": 
false, 00:07:38.334 "compare": false, 00:07:38.334 "compare_and_write": false, 00:07:38.334 "abort": false, 00:07:38.334 "seek_hole": true, 00:07:38.334 "seek_data": true, 00:07:38.334 "copy": false, 00:07:38.334 "nvme_iov_md": false 00:07:38.334 }, 00:07:38.334 "driver_specific": { 00:07:38.334 "lvol": { 00:07:38.334 "lvol_store_uuid": "38646052-d396-4abd-bebc-1a13af7c5fae", 00:07:38.334 "base_bdev": "aio_bdev", 00:07:38.334 "thin_provision": false, 00:07:38.334 "num_allocated_clusters": 38, 00:07:38.334 "snapshot": false, 00:07:38.334 "clone": false, 00:07:38.334 "esnap_clone": false 00:07:38.334 } 00:07:38.334 } 00:07:38.334 } 00:07:38.334 ] 00:07:38.334 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:38.334 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:38.334 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:38.593 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:38.593 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:38.593 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:38.851 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:38.851 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:38.851 [2024-11-20 17:01:56.850323] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:38.851 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:38.851 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:38.851 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:38.851 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.851 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.851 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.109 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.109 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.109 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.109 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.109 17:01:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.109 17:01:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:39.109 request: 00:07:39.109 { 00:07:39.109 "uuid": "38646052-d396-4abd-bebc-1a13af7c5fae", 00:07:39.109 "method": "bdev_lvol_get_lvstores", 00:07:39.109 "req_id": 1 00:07:39.109 } 00:07:39.109 Got JSON-RPC error response 00:07:39.109 response: 00:07:39.109 { 00:07:39.109 "code": -19, 00:07:39.109 "message": "No such device" 00:07:39.109 } 00:07:39.109 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:39.109 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.109 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.109 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.109 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.368 aio_bdev 00:07:39.369 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev eb058b23-9544-4a3f-82de-ed30456b7572 00:07:39.369 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=eb058b23-9544-4a3f-82de-ed30456b7572 00:07:39.369 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.369 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:39.369 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.369 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.369 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:39.626 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eb058b23-9544-4a3f-82de-ed30456b7572 -t 2000 00:07:39.626 [ 00:07:39.626 { 00:07:39.626 "name": "eb058b23-9544-4a3f-82de-ed30456b7572", 00:07:39.626 "aliases": [ 00:07:39.626 "lvs/lvol" 00:07:39.626 ], 00:07:39.626 "product_name": "Logical Volume", 00:07:39.626 "block_size": 4096, 00:07:39.626 "num_blocks": 38912, 00:07:39.626 "uuid": "eb058b23-9544-4a3f-82de-ed30456b7572", 00:07:39.626 "assigned_rate_limits": { 00:07:39.626 "rw_ios_per_sec": 0, 00:07:39.626 "rw_mbytes_per_sec": 0, 00:07:39.626 "r_mbytes_per_sec": 0, 00:07:39.626 "w_mbytes_per_sec": 0 00:07:39.626 }, 00:07:39.626 "claimed": false, 00:07:39.626 "zoned": false, 00:07:39.626 "supported_io_types": { 00:07:39.626 "read": true, 00:07:39.626 "write": true, 00:07:39.626 "unmap": true, 00:07:39.626 "flush": false, 00:07:39.626 "reset": true, 00:07:39.626 "nvme_admin": false, 00:07:39.626 "nvme_io": false, 00:07:39.626 "nvme_io_md": false, 00:07:39.626 "write_zeroes": true, 00:07:39.626 "zcopy": false, 00:07:39.626 "get_zone_info": false, 00:07:39.626 "zone_management": false, 00:07:39.626 "zone_append": false, 00:07:39.626 "compare": false, 00:07:39.626 "compare_and_write": false, 
00:07:39.626 "abort": false, 00:07:39.626 "seek_hole": true, 00:07:39.626 "seek_data": true, 00:07:39.626 "copy": false, 00:07:39.626 "nvme_iov_md": false 00:07:39.626 }, 00:07:39.626 "driver_specific": { 00:07:39.626 "lvol": { 00:07:39.626 "lvol_store_uuid": "38646052-d396-4abd-bebc-1a13af7c5fae", 00:07:39.626 "base_bdev": "aio_bdev", 00:07:39.626 "thin_provision": false, 00:07:39.626 "num_allocated_clusters": 38, 00:07:39.626 "snapshot": false, 00:07:39.626 "clone": false, 00:07:39.626 "esnap_clone": false 00:07:39.626 } 00:07:39.626 } 00:07:39.626 } 00:07:39.626 ] 00:07:39.626 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:39.626 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:39.626 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:39.884 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:39.884 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:39.884 17:01:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:40.143 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:40.143 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eb058b23-9544-4a3f-82de-ed30456b7572 00:07:40.401 17:01:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 38646052-d396-4abd-bebc-1a13af7c5fae 00:07:40.401 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.659 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:40.659 00:07:40.659 real 0m17.025s 00:07:40.659 user 0m43.771s 00:07:40.659 sys 0m3.863s 00:07:40.659 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.659 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.659 ************************************ 00:07:40.659 END TEST lvs_grow_dirty 00:07:40.659 ************************************ 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:40.918 nvmf_trace.0 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.918 rmmod nvme_tcp 00:07:40.918 rmmod nvme_fabrics 00:07:40.918 rmmod nvme_keyring 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2352248 ']' 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2352248 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2352248 ']' 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2352248 
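The `killprocess` trace above first probes whether the target PID still exists with `kill -0` before terminating it. A minimal sketch of that liveness idiom (the `pid_alive` helper name is hypothetical, not part of the SPDK scripts):

```shell
#!/bin/sh
# 'kill -0' delivers no signal; it only reports (via exit status) whether
# a process with that PID exists and is signalable.
pid_alive() {
    kill -0 "$1" 2>/dev/null
}

if pid_alive "$$"; then
    echo "alive"     # the current shell's own PID always exists
fi
```

The exit status of `kill -0` is what the trace's `kill -0 2352248` relies on: zero means the nvmf target process is still running and safe to `kill`.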
00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2352248 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2352248' 00:07:40.918 killing process with pid 2352248 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2352248 00:07:40.918 17:01:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2352248 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.177 17:01:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.079 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.079 00:07:43.079 real 0m41.874s 00:07:43.079 user 1m4.697s 00:07:43.079 sys 0m10.177s 00:07:43.079 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.079 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.079 ************************************ 00:07:43.079 END TEST nvmf_lvs_grow 00:07:43.079 ************************************ 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.337 ************************************ 00:07:43.337 START TEST nvmf_bdev_io_wait 00:07:43.337 ************************************ 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:43.337 * Looking for test storage... 
00:07:43.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:43.337 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.337 --rc genhtml_branch_coverage=1 00:07:43.337 --rc genhtml_function_coverage=1 00:07:43.337 --rc genhtml_legend=1 00:07:43.337 --rc geninfo_all_blocks=1 00:07:43.337 --rc geninfo_unexecuted_blocks=1 00:07:43.337 00:07:43.337 ' 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:43.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.337 --rc genhtml_branch_coverage=1 00:07:43.337 --rc genhtml_function_coverage=1 00:07:43.337 --rc genhtml_legend=1 00:07:43.337 --rc geninfo_all_blocks=1 00:07:43.337 --rc geninfo_unexecuted_blocks=1 00:07:43.337 00:07:43.337 ' 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:43.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.337 --rc genhtml_branch_coverage=1 00:07:43.337 --rc genhtml_function_coverage=1 00:07:43.337 --rc genhtml_legend=1 00:07:43.337 --rc geninfo_all_blocks=1 00:07:43.337 --rc geninfo_unexecuted_blocks=1 00:07:43.337 00:07:43.337 ' 00:07:43.337 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:43.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.337 --rc genhtml_branch_coverage=1 00:07:43.337 --rc genhtml_function_coverage=1 00:07:43.337 --rc genhtml_legend=1 00:07:43.337 --rc geninfo_all_blocks=1 00:07:43.337 --rc geninfo_unexecuted_blocks=1 00:07:43.337 00:07:43.338 ' 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.338 17:02:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.338 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
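The `paths/export.sh` trace above shows the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries being prepended on every re-source, so PATH accumulates many duplicates. A generic order-preserving dedup sketch (illustrative only; the `dedup_path` helper is not something these scripts define):

```shell
#!/bin/sh
# Collapse a colon-separated search path, keeping the first occurrence
# of each entry in its original order.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"
echo
```

Duplicate PATH entries are harmless for lookup (the first match wins) but make traces like the one above hard to read.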
00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:43.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
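The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'`: the `-eq` operator requires both operands to be integers, and the variable tested at `nvmf/common.sh` line 33 expands to an empty string. A minimal sketch of the pitfall and one conventional guard (a generic illustration, not the SPDK fix):

```shell
#!/bin/sh
flag=""   # e.g. an unset feature toggle expanding to nothing

# Broken: '[ "" -eq 1 ]' is a type error (non-zero exit, stderr noise),
# not simply "false" -- exactly the message seen in the log.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
fi

# Guarded: default the empty value to 0 before the numeric comparison.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

Because the error only affects the test's exit status, the script above continues running, which is why the log shows the complaint and then proceeds normally.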
00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:43.596 17:02:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.163 17:02:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:50.163 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:50.163 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.163 17:02:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:50.163 Found net devices under 0000:86:00.0: cvl_0_0 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.163 
17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:50.163 Found net devices under 0000:86:00.1: cvl_0_1 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.163 17:02:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.163 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:07:50.164 00:07:50.164 --- 10.0.0.2 ping statistics --- 00:07:50.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.164 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:07:50.164 00:07:50.164 --- 10.0.0.1 ping statistics --- 00:07:50.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.164 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2356534 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
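The nvmf_tcp_init steps traced above carve the target NIC into a private network namespace so the initiator and target can talk real TCP on one host, verified by the two pings. The sequence as a dry-run sketch (the run wrapper only echoes; removing it and running as root against a box that actually has the cvl_0_* ports would execute it for real — interface names, IPs, and the netns name are the ones from the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from nvmf/common.sh.
run() { echo "+ $*"; }    # echo-only stand-in for the real commands

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port lives in the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

Once this is in place, every target-side command in the test is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what NVMF_TARGET_NS_CMD holds in the trace.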
nvmf/common.sh@510 -- # waitforlisten 2356534 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2356534 ']' 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 [2024-11-20 17:02:07.499561] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:07:50.164 [2024-11-20 17:02:07.499608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.164 [2024-11-20 17:02:07.563051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.164 [2024-11-20 17:02:07.607787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.164 [2024-11-20 17:02:07.607820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:50.164 [2024-11-20 17:02:07.607827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.164 [2024-11-20 17:02:07.607834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.164 [2024-11-20 17:02:07.607839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.164 [2024-11-20 17:02:07.612218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.164 [2024-11-20 17:02:07.612246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.164 [2024-11-20 17:02:07.612357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.164 [2024-11-20 17:02:07.612358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 17:02:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 [2024-11-20 17:02:07.791810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 Malloc0 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.164 
17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 [2024-11-20 17:02:07.846779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2356562 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2356564 
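The rpc_cmd calls above stand up the target: constrained bdev options (the tiny pool and cache sizes are presumably what forces the bdev IO_WAIT path this test exercises — an inference from the test's name, not stated in the log), a TCP transport, a 64 MiB malloc bdev, a subsystem holding it as a namespace, and a listener on 10.0.0.2:4420. The same sequence as a dry-run sketch (rpc only echoes here; pointing it at scripts/rpc.py with a live nvmf_tgt would run it for real — all RPC names and arguments are copied from the trace):

```shell
#!/usr/bin/env bash
# Dry-run of the bdev_io_wait.sh target-setup RPC sequence.
rpc() { echo "rpc.py $*"; }    # echo-only stand-in for scripts/rpc.py

rpc bdev_set_options -p 5 -c 1   # small bdev_io pool/cache (set before framework init)
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The ordering matters in the real run: the target was started with --wait-for-rpc, so bdev_set_options has to land before framework_start_init, matching the trace.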
00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.164 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.164 { 00:07:50.164 "params": { 00:07:50.164 "name": "Nvme$subsystem", 00:07:50.164 "trtype": "$TEST_TRANSPORT", 00:07:50.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.164 "adrfam": "ipv4", 00:07:50.164 "trsvcid": "$NVMF_PORT", 00:07:50.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.165 "hdgst": ${hdgst:-false}, 00:07:50.165 "ddgst": ${ddgst:-false} 00:07:50.165 }, 00:07:50.165 "method": "bdev_nvme_attach_controller" 00:07:50.165 } 00:07:50.165 EOF 00:07:50.165 )") 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2356566 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.165 { 00:07:50.165 "params": { 00:07:50.165 "name": "Nvme$subsystem", 00:07:50.165 "trtype": "$TEST_TRANSPORT", 00:07:50.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.165 "adrfam": "ipv4", 00:07:50.165 "trsvcid": "$NVMF_PORT", 00:07:50.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.165 "hdgst": ${hdgst:-false}, 00:07:50.165 "ddgst": ${ddgst:-false} 00:07:50.165 }, 00:07:50.165 "method": "bdev_nvme_attach_controller" 00:07:50.165 } 00:07:50.165 EOF 00:07:50.165 )") 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2356569 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.165 { 00:07:50.165 "params": { 00:07:50.165 "name": "Nvme$subsystem", 00:07:50.165 "trtype": "$TEST_TRANSPORT", 00:07:50.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.165 "adrfam": "ipv4", 00:07:50.165 "trsvcid": "$NVMF_PORT", 00:07:50.165 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.165 "hdgst": ${hdgst:-false}, 00:07:50.165 "ddgst": ${ddgst:-false} 00:07:50.165 }, 00:07:50.165 "method": "bdev_nvme_attach_controller" 00:07:50.165 } 00:07:50.165 EOF 00:07:50.165 )") 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.165 { 00:07:50.165 "params": { 00:07:50.165 "name": "Nvme$subsystem", 00:07:50.165 "trtype": "$TEST_TRANSPORT", 00:07:50.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.165 "adrfam": "ipv4", 00:07:50.165 "trsvcid": "$NVMF_PORT", 00:07:50.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.165 "hdgst": ${hdgst:-false}, 00:07:50.165 "ddgst": ${ddgst:-false} 00:07:50.165 }, 00:07:50.165 "method": "bdev_nvme_attach_controller" 00:07:50.165 } 00:07:50.165 EOF 00:07:50.165 )") 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2356562 00:07:50.165 17:02:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.165 "params": { 00:07:50.165 "name": "Nvme1", 00:07:50.165 "trtype": "tcp", 00:07:50.165 "traddr": "10.0.0.2", 00:07:50.165 "adrfam": "ipv4", 00:07:50.165 "trsvcid": "4420", 00:07:50.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.165 "hdgst": false, 00:07:50.165 "ddgst": false 00:07:50.165 }, 00:07:50.165 "method": "bdev_nvme_attach_controller" 00:07:50.165 }' 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.165 "params": { 00:07:50.165 "name": "Nvme1", 00:07:50.165 "trtype": "tcp", 00:07:50.165 "traddr": "10.0.0.2", 00:07:50.165 "adrfam": "ipv4", 00:07:50.165 "trsvcid": "4420", 00:07:50.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.165 "hdgst": false, 00:07:50.165 "ddgst": false 00:07:50.165 }, 00:07:50.165 "method": "bdev_nvme_attach_controller" 00:07:50.165 }' 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.165 "params": { 00:07:50.165 "name": "Nvme1", 00:07:50.165 "trtype": "tcp", 00:07:50.165 "traddr": "10.0.0.2", 00:07:50.165 "adrfam": "ipv4", 00:07:50.165 "trsvcid": "4420", 00:07:50.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.165 "hdgst": false, 00:07:50.165 "ddgst": false 00:07:50.165 }, 00:07:50.165 "method": "bdev_nvme_attach_controller" 00:07:50.165 }' 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:50.165 17:02:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.165 "params": { 00:07:50.165 "name": "Nvme1", 00:07:50.165 "trtype": "tcp", 00:07:50.165 "traddr": "10.0.0.2", 00:07:50.165 "adrfam": "ipv4", 00:07:50.165 "trsvcid": "4420", 00:07:50.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.165 "hdgst": false, 00:07:50.165 "ddgst": false 00:07:50.165 }, 00:07:50.165 "method": "bdev_nvme_attach_controller" 00:07:50.165 }' 00:07:50.165 [2024-11-20 17:02:07.899124] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
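The four JSON blobs printed above come from gen_nvmf_target_json, which accumulates one heredoc fragment per subsystem in a bash array, joins the fragments with a comma IFS, and normalizes the result with `jq .` before feeding it to bdevperf over /dev/fd/63. A simplified, self-contained sketch of that heredoc-into-array pattern (field values copied from the trace; the real helper also substitutes `${hdgst:-false}`-style defaults and runs the joined output through `jq .`, which is omitted here so the sketch has no jq dependency):

```shell
#!/usr/bin/env bash
# Simplified sketch of the gen_nvmf_target_json pattern from nvmf/common.sh.
gen_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        # One JSON fragment per subsystem, captured via command substitution.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # fragments joined with "," when multiple
}

gen_json 1
```

Passing the config as `--json /dev/fd/63` (process substitution) lets each bdevperf instance get its own generated config without temp files, which is why four distinct blobs appear in the log.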
00:07:50.165 [2024-11-20 17:02:07.899174] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:50.165 [2024-11-20 17:02:07.899223] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:07:50.165 [2024-11-20 17:02:07.899264] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:50.165 [2024-11-20 17:02:07.900577] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:07:50.165 [2024-11-20 17:02:07.900576] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:07:50.165 [2024-11-20 17:02:07.900623] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:50.165 [2024-11-20 17:02:07.900622] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:50.165 [2024-11-20 17:02:08.099720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.165 [2024-11-20 17:02:08.142161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.165 [2024-11-20 17:02:08.196626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.422 [2024-11-20 17:02:08.248532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.422 [2024-11-20 17:02:08.255926] reactor.c:1005:reactor_run:
*NOTICE*: Reactor started on core 6 00:07:50.422 [2024-11-20 17:02:08.290948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:50.422 [2024-11-20 17:02:08.297512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.422 [2024-11-20 17:02:08.337461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:50.422 Running I/O for 1 seconds... 00:07:50.679 Running I/O for 1 seconds... 00:07:50.679 Running I/O for 1 seconds... 00:07:50.679 Running I/O for 1 seconds... 00:07:51.609 244616.00 IOPS, 955.53 MiB/s 00:07:51.609 Latency(us) 00:07:51.609 [2024-11-20T16:02:09.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.609 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:51.609 Nvme1n1 : 1.00 244249.81 954.10 0.00 0.00 521.01 221.38 1490.16 00:07:51.609 [2024-11-20T16:02:09.652Z] =================================================================================================================== 00:07:51.609 [2024-11-20T16:02:09.652Z] Total : 244249.81 954.10 0.00 0.00 521.01 221.38 1490.16 00:07:51.609 8467.00 IOPS, 33.07 MiB/s 00:07:51.609 Latency(us) 00:07:51.609 [2024-11-20T16:02:09.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.609 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:51.609 Nvme1n1 : 1.02 8458.16 33.04 0.00 0.00 14970.42 6584.81 22344.66 00:07:51.609 [2024-11-20T16:02:09.652Z] =================================================================================================================== 00:07:51.609 [2024-11-20T16:02:09.652Z] Total : 8458.16 33.04 0.00 0.00 14970.42 6584.81 22344.66 00:07:51.609 11798.00 IOPS, 46.09 MiB/s 00:07:51.609 Latency(us) 00:07:51.609 [2024-11-20T16:02:09.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.609 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:51.609 Nvme1n1 : 1.01 
11840.44 46.25 0.00 0.00 10768.21 6210.32 20846.69 00:07:51.609 [2024-11-20T16:02:09.652Z] =================================================================================================================== 00:07:51.609 [2024-11-20T16:02:09.652Z] Total : 11840.44 46.25 0.00 0.00 10768.21 6210.32 20846.69 00:07:51.609 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2356564 00:07:51.866 8793.00 IOPS, 34.35 MiB/s 00:07:51.866 Latency(us) 00:07:51.866 [2024-11-20T16:02:09.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.866 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:51.866 Nvme1n1 : 1.00 8903.82 34.78 0.00 0.00 14348.44 2340.57 38947.11 00:07:51.866 [2024-11-20T16:02:09.909Z] =================================================================================================================== 00:07:51.866 [2024-11-20T16:02:09.909Z] Total : 8903.82 34.78 0.00 0.00 14348.44 2340.57 38947.11 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2356566 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2356569 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:51.867 
17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.867 rmmod nvme_tcp 00:07:51.867 rmmod nvme_fabrics 00:07:51.867 rmmod nvme_keyring 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2356534 ']' 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2356534 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2356534 ']' 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2356534 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.867 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356534 00:07:52.125 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:07:52.125 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.125 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356534' 00:07:52.125 killing process with pid 2356534 00:07:52.125 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2356534 00:07:52.125 17:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2356534 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.125 17:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.660 
17:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.660 00:07:54.660 real 0m10.959s 00:07:54.660 user 0m16.898s 00:07:54.660 sys 0m6.152s 00:07:54.660 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.660 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.660 ************************************ 00:07:54.660 END TEST nvmf_bdev_io_wait 00:07:54.660 ************************************ 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.661 ************************************ 00:07:54.661 START TEST nvmf_queue_depth 00:07:54.661 ************************************ 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.661 * Looking for test storage... 
00:07:54.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:54.661 
17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.661 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:54.661 --rc genhtml_branch_coverage=1 00:07:54.661 --rc genhtml_function_coverage=1 00:07:54.661 --rc genhtml_legend=1 00:07:54.661 --rc geninfo_all_blocks=1 00:07:54.661 --rc geninfo_unexecuted_blocks=1 00:07:54.661 00:07:54.661 ' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.661 --rc genhtml_branch_coverage=1 00:07:54.661 --rc genhtml_function_coverage=1 00:07:54.661 --rc genhtml_legend=1 00:07:54.661 --rc geninfo_all_blocks=1 00:07:54.661 --rc geninfo_unexecuted_blocks=1 00:07:54.661 00:07:54.661 ' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.661 --rc genhtml_branch_coverage=1 00:07:54.661 --rc genhtml_function_coverage=1 00:07:54.661 --rc genhtml_legend=1 00:07:54.661 --rc geninfo_all_blocks=1 00:07:54.661 --rc geninfo_unexecuted_blocks=1 00:07:54.661 00:07:54.661 ' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.661 --rc genhtml_branch_coverage=1 00:07:54.661 --rc genhtml_function_coverage=1 00:07:54.661 --rc genhtml_legend=1 00:07:54.661 --rc geninfo_all_blocks=1 00:07:54.661 --rc geninfo_unexecuted_blocks=1 00:07:54.661 00:07:54.661 ' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.661 17:02:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:54.661 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.662 17:02:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.662 17:02:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.662 17:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.332 17:02:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.332 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:01.333 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:01.333 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:01.333 Found net devices under 0000:86:00.0: cvl_0_0 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:01.333 Found net devices under 0000:86:00.1: cvl_0_1 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.333 
17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:08:01.333 00:08:01.333 --- 10.0.0.2 ping statistics --- 00:08:01.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.333 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:01.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:08:01.333 00:08:01.333 --- 10.0.0.1 ping statistics --- 00:08:01.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.333 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2360529 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2360529 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2360529 ']' 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.333 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.333 [2024-11-20 17:02:18.481077] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:08:01.334 [2024-11-20 17:02:18.481120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.334 [2024-11-20 17:02:18.561539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.334 [2024-11-20 17:02:18.602043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.334 [2024-11-20 17:02:18.602075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:01.334 [2024-11-20 17:02:18.602083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.334 [2024-11-20 17:02:18.602089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.334 [2024-11-20 17:02:18.602094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.334 [2024-11-20 17:02:18.602670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 [2024-11-20 17:02:18.738197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 Malloc0 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 [2024-11-20 17:02:18.788443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.334 17:02:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2360602 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2360602 /var/tmp/bdevperf.sock 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2360602 ']' 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:01.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.334 17:02:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 [2024-11-20 17:02:18.840394] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:08:01.334 [2024-11-20 17:02:18.840433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360602 ] 00:08:01.334 [2024-11-20 17:02:18.915194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.334 [2024-11-20 17:02:18.955581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.334 17:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.334 17:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:01.334 17:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:01.334 17:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.334 17:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 NVMe0n1 00:08:01.334 17:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.334 17:02:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:01.334 Running I/O for 10 seconds... 
00:08:03.206 11911.00 IOPS, 46.53 MiB/s [2024-11-20T16:02:22.624Z] 11982.00 IOPS, 46.80 MiB/s [2024-11-20T16:02:23.559Z] 12129.67 IOPS, 47.38 MiB/s [2024-11-20T16:02:24.496Z] 12271.75 IOPS, 47.94 MiB/s [2024-11-20T16:02:25.430Z] 12284.80 IOPS, 47.99 MiB/s [2024-11-20T16:02:26.366Z] 12375.50 IOPS, 48.34 MiB/s [2024-11-20T16:02:27.302Z] 12410.43 IOPS, 48.48 MiB/s [2024-11-20T16:02:28.680Z] 12408.38 IOPS, 48.47 MiB/s [2024-11-20T16:02:29.617Z] 12416.67 IOPS, 48.50 MiB/s [2024-11-20T16:02:29.617Z] 12447.10 IOPS, 48.62 MiB/s 00:08:11.574 Latency(us) 00:08:11.574 [2024-11-20T16:02:29.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.574 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:11.574 Verification LBA range: start 0x0 length 0x4000 00:08:11.574 NVMe0n1 : 10.06 12462.90 48.68 0.00 0.00 81868.29 18849.40 53926.77 00:08:11.574 [2024-11-20T16:02:29.617Z] =================================================================================================================== 00:08:11.574 [2024-11-20T16:02:29.617Z] Total : 12462.90 48.68 0.00 0.00 81868.29 18849.40 53926.77 00:08:11.574 { 00:08:11.574 "results": [ 00:08:11.574 { 00:08:11.574 "job": "NVMe0n1", 00:08:11.574 "core_mask": "0x1", 00:08:11.574 "workload": "verify", 00:08:11.574 "status": "finished", 00:08:11.574 "verify_range": { 00:08:11.574 "start": 0, 00:08:11.574 "length": 16384 00:08:11.574 }, 00:08:11.574 "queue_depth": 1024, 00:08:11.574 "io_size": 4096, 00:08:11.574 "runtime": 10.061865, 00:08:11.574 "iops": 12462.898279792067, 00:08:11.574 "mibps": 48.68319640543776, 00:08:11.574 "io_failed": 0, 00:08:11.574 "io_timeout": 0, 00:08:11.574 "avg_latency_us": 81868.28892546518, 00:08:11.574 "min_latency_us": 18849.401904761904, 00:08:11.574 "max_latency_us": 53926.76571428571 00:08:11.574 } 00:08:11.574 ], 00:08:11.574 "core_count": 1 00:08:11.574 } 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2360602 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2360602 ']' 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2360602 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2360602 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2360602' 00:08:11.574 killing process with pid 2360602 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2360602 00:08:11.574 Received shutdown signal, test time was about 10.000000 seconds 00:08:11.574 00:08:11.574 Latency(us) 00:08:11.574 [2024-11-20T16:02:29.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.574 [2024-11-20T16:02:29.617Z] =================================================================================================================== 00:08:11.574 [2024-11-20T16:02:29.617Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2360602 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.574 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.574 rmmod nvme_tcp 00:08:11.574 rmmod nvme_fabrics 00:08:11.574 rmmod nvme_keyring 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2360529 ']' 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2360529 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2360529 ']' 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2360529 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2360529 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2360529' 00:08:11.833 killing process with pid 2360529 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2360529 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2360529 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.833 17:02:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.363 17:02:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.364 00:08:14.364 real 0m19.714s 00:08:14.364 user 0m22.897s 00:08:14.364 sys 0m6.154s 00:08:14.364 17:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.364 17:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.364 ************************************ 00:08:14.364 END TEST nvmf_queue_depth 00:08:14.364 ************************************ 00:08:14.364 17:02:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.364 17:02:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.364 17:02:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.364 17:02:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.364 ************************************ 00:08:14.364 START TEST nvmf_target_multipath 00:08:14.364 ************************************ 00:08:14.364 17:02:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.364 * Looking for test storage... 
00:08:14.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:14.364 17:02:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.364 --rc genhtml_branch_coverage=1 00:08:14.364 --rc genhtml_function_coverage=1 00:08:14.364 --rc genhtml_legend=1 00:08:14.364 --rc geninfo_all_blocks=1 00:08:14.364 --rc geninfo_unexecuted_blocks=1 00:08:14.364 00:08:14.364 ' 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.364 --rc genhtml_branch_coverage=1 00:08:14.364 --rc genhtml_function_coverage=1 00:08:14.364 --rc genhtml_legend=1 00:08:14.364 --rc geninfo_all_blocks=1 00:08:14.364 --rc geninfo_unexecuted_blocks=1 00:08:14.364 00:08:14.364 ' 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.364 --rc genhtml_branch_coverage=1 00:08:14.364 --rc genhtml_function_coverage=1 00:08:14.364 --rc genhtml_legend=1 00:08:14.364 --rc geninfo_all_blocks=1 00:08:14.364 --rc geninfo_unexecuted_blocks=1 00:08:14.364 00:08:14.364 ' 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.364 --rc genhtml_branch_coverage=1 00:08:14.364 --rc genhtml_function_coverage=1 00:08:14.364 --rc genhtml_legend=1 00:08:14.364 --rc geninfo_all_blocks=1 00:08:14.364 --rc geninfo_unexecuted_blocks=1 00:08:14.364 00:08:14.364 ' 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.364 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.365 17:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:20.935 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:20.936 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:20.936 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:20.936 Found net devices under 0000:86:00.0: cvl_0_0 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.936 17:02:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:20.936 Found net devices under 0000:86:00.1: cvl_0_1 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.936 17:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:20.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:08:20.936 00:08:20.936 --- 10.0.0.2 ping statistics --- 00:08:20.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.936 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:20.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:08:20.936 00:08:20.936 --- 10.0.0.1 ping statistics --- 00:08:20.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.936 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:20.936 only one NIC for nvmf test 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:20.936 17:02:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.936 rmmod nvme_tcp 00:08:20.936 rmmod nvme_fabrics 00:08:20.936 rmmod nvme_keyring 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:20.936 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:20.937 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.937 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.937 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.937 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.937 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.937 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.937 17:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.316 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.575 00:08:22.575 real 0m8.358s 00:08:22.575 user 0m1.825s 00:08:22.575 sys 0m4.554s 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:22.575 ************************************ 00:08:22.575 END TEST nvmf_target_multipath 00:08:22.575 ************************************ 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.575 ************************************ 00:08:22.575 START TEST nvmf_zcopy 00:08:22.575 ************************************ 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:22.575 * Looking for test storage... 00:08:22.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.575 17:02:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.575 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.575 --rc genhtml_branch_coverage=1 00:08:22.575 --rc genhtml_function_coverage=1 00:08:22.575 --rc genhtml_legend=1 00:08:22.575 --rc geninfo_all_blocks=1 00:08:22.575 --rc geninfo_unexecuted_blocks=1 00:08:22.575 00:08:22.576 ' 00:08:22.576 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.576 --rc genhtml_branch_coverage=1 00:08:22.576 --rc genhtml_function_coverage=1 00:08:22.576 --rc genhtml_legend=1 00:08:22.576 --rc geninfo_all_blocks=1 00:08:22.576 --rc geninfo_unexecuted_blocks=1 00:08:22.576 00:08:22.576 ' 00:08:22.576 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.576 --rc genhtml_branch_coverage=1 00:08:22.576 --rc genhtml_function_coverage=1 00:08:22.576 --rc genhtml_legend=1 00:08:22.576 --rc geninfo_all_blocks=1 00:08:22.576 --rc geninfo_unexecuted_blocks=1 00:08:22.576 00:08:22.576 ' 00:08:22.576 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.576 --rc genhtml_branch_coverage=1 00:08:22.576 --rc 
genhtml_function_coverage=1 00:08:22.576 --rc genhtml_legend=1 00:08:22.576 --rc geninfo_all_blocks=1 00:08:22.576 --rc geninfo_unexecuted_blocks=1 00:08:22.576 00:08:22.576 ' 00:08:22.576 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.835 17:02:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.835 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.836 17:02:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.836 17:02:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.410 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.411 17:02:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:29.411 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:29.411 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:29.411 Found net devices under 0000:86:00.0: cvl_0_0 00:08:29.411 17:02:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:29.411 Found net devices under 0000:86:00.1: cvl_0_1 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.411 17:02:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:29.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:08:29.411 00:08:29.411 --- 10.0.0.2 ping statistics --- 00:08:29.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.411 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:08:29.411 00:08:29.411 --- 10.0.0.1 ping statistics --- 00:08:29.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.411 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.411 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2369498 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2369498 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2369498 ']' 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 [2024-11-20 17:02:46.652852] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:08:29.412 [2024-11-20 17:02:46.652896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.412 [2024-11-20 17:02:46.730185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.412 [2024-11-20 17:02:46.770788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.412 [2024-11-20 17:02:46.770819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:29.412 [2024-11-20 17:02:46.770826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.412 [2024-11-20 17:02:46.770832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.412 [2024-11-20 17:02:46.770837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.412 [2024-11-20 17:02:46.771385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 [2024-11-20 17:02:46.903410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 [2024-11-20 17:02:46.923607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 malloc0 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:29.412 { 00:08:29.412 "params": { 00:08:29.412 "name": "Nvme$subsystem", 00:08:29.412 "trtype": "$TEST_TRANSPORT", 00:08:29.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.412 "adrfam": "ipv4", 00:08:29.412 "trsvcid": "$NVMF_PORT", 00:08:29.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.412 "hdgst": ${hdgst:-false}, 00:08:29.412 "ddgst": ${ddgst:-false} 00:08:29.412 }, 00:08:29.412 "method": "bdev_nvme_attach_controller" 00:08:29.412 } 00:08:29.412 EOF 00:08:29.412 )") 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:29.412 17:02:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:29.412 "params": { 00:08:29.412 "name": "Nvme1", 00:08:29.412 "trtype": "tcp", 00:08:29.412 "traddr": "10.0.0.2", 00:08:29.412 "adrfam": "ipv4", 00:08:29.412 "trsvcid": "4420", 00:08:29.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.412 "hdgst": false, 00:08:29.412 "ddgst": false 00:08:29.412 }, 00:08:29.412 "method": "bdev_nvme_attach_controller" 00:08:29.412 }' 00:08:29.412 [2024-11-20 17:02:47.003853] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:08:29.412 [2024-11-20 17:02:47.003896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369520 ] 00:08:29.412 [2024-11-20 17:02:47.075383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.412 [2024-11-20 17:02:47.116184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.412 Running I/O for 10 seconds... 
00:08:31.727 8736.00 IOPS, 68.25 MiB/s [2024-11-20T16:02:50.706Z] 8767.50 IOPS, 68.50 MiB/s [2024-11-20T16:02:51.641Z] 8757.67 IOPS, 68.42 MiB/s [2024-11-20T16:02:52.700Z] 8754.00 IOPS, 68.39 MiB/s [2024-11-20T16:02:53.674Z] 8763.40 IOPS, 68.46 MiB/s [2024-11-20T16:02:54.610Z] 8770.17 IOPS, 68.52 MiB/s [2024-11-20T16:02:55.545Z] 8774.57 IOPS, 68.55 MiB/s [2024-11-20T16:02:56.481Z] 8778.50 IOPS, 68.58 MiB/s [2024-11-20T16:02:57.860Z] 8782.44 IOPS, 68.61 MiB/s [2024-11-20T16:02:57.860Z] 8788.60 IOPS, 68.66 MiB/s 00:08:39.817 Latency(us) 00:08:39.817 [2024-11-20T16:02:57.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.817 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:39.817 Verification LBA range: start 0x0 length 0x1000 00:08:39.817 Nvme1n1 : 10.01 8789.32 68.67 0.00 0.00 14521.51 596.85 22344.66 00:08:39.817 [2024-11-20T16:02:57.860Z] =================================================================================================================== 00:08:39.817 [2024-11-20T16:02:57.860Z] Total : 8789.32 68.67 0.00 0.00 14521.51 596.85 22344.66 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2371353 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.817 17:02:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.817 { 00:08:39.817 "params": { 00:08:39.817 "name": "Nvme$subsystem", 00:08:39.817 "trtype": "$TEST_TRANSPORT", 00:08:39.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.817 "adrfam": "ipv4", 00:08:39.817 "trsvcid": "$NVMF_PORT", 00:08:39.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.817 "hdgst": ${hdgst:-false}, 00:08:39.817 "ddgst": ${ddgst:-false} 00:08:39.817 }, 00:08:39.817 "method": "bdev_nvme_attach_controller" 00:08:39.817 } 00:08:39.817 EOF 00:08:39.817 )") 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:39.817 [2024-11-20 17:02:57.603954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.603985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:39.817 17:02:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.817 "params": { 00:08:39.817 "name": "Nvme1", 00:08:39.817 "trtype": "tcp", 00:08:39.817 "traddr": "10.0.0.2", 00:08:39.817 "adrfam": "ipv4", 00:08:39.817 "trsvcid": "4420", 00:08:39.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.817 "hdgst": false, 00:08:39.817 "ddgst": false 00:08:39.817 }, 00:08:39.817 "method": "bdev_nvme_attach_controller" 00:08:39.817 }' 00:08:39.817 [2024-11-20 17:02:57.615956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.615968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.627987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.627997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.640016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.640025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.643821] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:08:39.817 [2024-11-20 17:02:57.643864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371353 ] 00:08:39.817 [2024-11-20 17:02:57.652048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.652058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.664079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.664088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.676111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.676120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.688148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.688161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.700179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.700188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.712215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.712224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.718275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.817 [2024-11-20 17:02:57.724246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:39.817 [2024-11-20 17:02:57.724255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.736279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.736294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.748308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.748320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.760342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.760356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.760510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.817 [2024-11-20 17:02:57.772384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.772400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.784411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.784432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.796442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.796455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.808483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.808495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.820517] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.820530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.832542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.832553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.844571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.844585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.817 [2024-11-20 17:02:57.856627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.817 [2024-11-20 17:02:57.856645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.868650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.868664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.880681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.880694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.892710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.892720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.904738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.904748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.916772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.916783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.928809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.928823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.940841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.940850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.952874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.952883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.964907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.964917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.976943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.976957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:57.988973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:57.988983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:58.001007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:58.001017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:58.013040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 
[2024-11-20 17:02:58.013049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:58.025074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:58.025087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:58.037107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:58.037116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:58.049141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:58.049150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:58.061177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:58.061188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.077 [2024-11-20 17:02:58.110477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.077 [2024-11-20 17:02:58.110498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.121378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.121389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 Running I/O for 5 seconds... 
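The error pair repeated throughout this log — `spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use` followed by `nvmf_rpc_ns_paused: Unable to add namespace` — is the expected outcome of the test re-issuing an add-namespace RPC with an NSID that is still attached to the subsystem. The sketch below is a minimal illustrative model of that duplicate-NSID check (an assumption for clarity, not SPDK's actual implementation; the `Subsystem` class and its methods are hypothetical):

```python
# Illustrative model (NOT SPDK source) of the duplicate-NSID rejection seen in
# this log: a subsystem tracks active namespace IDs and refuses to add an NSID
# that is already in use until it has been removed.
class Subsystem:
    def __init__(self):
        self.namespaces = {}  # nsid -> backing bdev name

    def add_ns(self, nsid, bdev):
        # Mirrors the check that logs "Requested NSID <n> already in use".
        if nsid in self.namespaces:
            raise ValueError(f"Requested NSID {nsid} already in use")
        self.namespaces[nsid] = bdev

    def remove_ns(self, nsid):
        self.namespaces.pop(nsid, None)


sub = Subsystem()
sub.add_ns(1, "Malloc0")
try:
    sub.add_ns(1, "Malloc0")  # second add with the same NSID is rejected
except ValueError as e:
    print(e)  # -> Requested NSID 1 already in use
sub.remove_ns(1)
sub.add_ns(1, "Malloc0")      # succeeds once the NSID has been freed
```

Under this model, the flood of error pairs above simply reflects the test loop retrying the add while NSID 1 remains attached; each retry fails at the same check until the namespace is removed.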
00:08:40.336 [2024-11-20 17:02:58.137998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.138017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.153360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.153379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.167446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.167464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.176357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.176375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.190581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.190599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.204046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.204064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.217960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.217979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.231816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.231835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.246127] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.246145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.260176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.260195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.273769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.273787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.287603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.287620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.301660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.301678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.315365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.315383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.329056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.329075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.343227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.343245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.352111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.352129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.336 [2024-11-20 17:02:58.366599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.336 [2024-11-20 17:02:58.366621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.380496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.380514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.394795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.394815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.403912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.403931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.418349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.418367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.432198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.432223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.445877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.445894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.460055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 
[2024-11-20 17:02:58.460074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.471142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.471160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.485564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.485581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.499212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.499246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.513132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.513150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.595 [2024-11-20 17:02:58.526773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.595 [2024-11-20 17:02:58.526791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.535733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.535752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.549798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.549817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.563430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.563449] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.577370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.577389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.590983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.591003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.604965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.604983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.613905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.613924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.623171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.623189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.596 [2024-11-20 17:02:58.632523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.596 [2024-11-20 17:02:58.632542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.647185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.647212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.657922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.657940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:40.855 [2024-11-20 17:02:58.672265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.672283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.685966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.685984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.699637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.699656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.713225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.713244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.727145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.727163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.741195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.741221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.751976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.751995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.766030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.766049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.779940] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.779960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.793827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.793845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.803085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.803103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.812829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.812847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.827237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.827255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.840930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.840948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.854895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.854913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.868776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.855 [2024-11-20 17:02:58.868794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.855 [2024-11-20 17:02:58.877945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:40.856 [2024-11-20 17:02:58.877964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.856 [2024-11-20 17:02:58.892080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.856 [2024-11-20 17:02:58.892099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.114 [2024-11-20 17:02:58.905701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.114 [2024-11-20 17:02:58.905720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.114 [2024-11-20 17:02:58.915144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.114 [2024-11-20 17:02:58.915163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.114 [2024-11-20 17:02:58.929406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:58.929424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:58.943286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:58.943304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:58.957311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:58.957329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:58.971271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:58.971290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:58.985129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 
[2024-11-20 17:02:58.985147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:58.999187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:58.999212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.010311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.010329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.019640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.019658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.033803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.033822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.047946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.047966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.059116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.059135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.072867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.072885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.087075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.087094] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.100963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.100982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.114858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.114877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 16754.00 IOPS, 130.89 MiB/s [2024-11-20T16:02:59.158Z] [2024-11-20 17:02:59.128383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.128402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.115 [2024-11-20 17:02:59.142388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.115 [2024-11-20 17:02:59.142406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.155878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.155896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.170289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.170307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.183784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.183802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.197793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.197812] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.209002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.209019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.223471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.223489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.237411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.237429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.251441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.251459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.265380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.265398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.274295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.274313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.288344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.288362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.301924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.301943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:41.373 [2024-11-20 17:02:59.315818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.315836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.329685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.329703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.343708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.343730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.357742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.357761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.371322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.371340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.384935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.384953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.398800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.398818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.373 [2024-11-20 17:02:59.412914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.373 [2024-11-20 17:02:59.412932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.631 [2024-11-20 17:02:59.426376] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.631 [2024-11-20 17:02:59.426394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.631
[... the same subsystem.c:2126 "Requested NSID 1 already in use" / nvmf_rpc.c:1520 "Unable to add namespace" error pair repeated continuously with new timestamps from 17:02:59.440 through 17:03:01.743 ...]
16791.00 IOPS, 131.18 MiB/s [2024-11-20T16:03:00.195Z]
16792.67 IOPS, 131.19 MiB/s [2024-11-20T16:03:01.232Z]
add namespace 00:08:43.967 [2024-11-20 17:03:01.757475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.757494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.771296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.771315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.784916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.784935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.798966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.798984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.813034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.813053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.826743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.826761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.840518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.840537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.854368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.854387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.868106] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.868125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.881854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.881873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.895524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.895542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.909424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.909441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.923526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.923544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.937618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.937636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.951899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.951918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.963025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.963043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.977243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.977262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:01.991153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:01.991171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.967 [2024-11-20 17:03:02.004695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.967 [2024-11-20 17:03:02.004713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.018784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.018801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.032409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.032427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.046579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.046596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.060212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.060231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.074070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.074088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.088261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 
[2024-11-20 17:03:02.088280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.102335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.102354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.115932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.115950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.129723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.129742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 16787.25 IOPS, 131.15 MiB/s [2024-11-20T16:03:02.270Z] [2024-11-20 17:03:02.143917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.143935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.154198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.154225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.168501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.168519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.182244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.182263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.196148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 
[2024-11-20 17:03:02.196167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.210505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.210523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.221574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.221592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.235550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.235568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.249312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.249330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.227 [2024-11-20 17:03:02.262844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.227 [2024-11-20 17:03:02.262862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.276701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.276720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.291093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.291112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.301570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.301588] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.315584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.315602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.329571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.329589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.343485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.343503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.357260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.357278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.371341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.371360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.384879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.384897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.399669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.399686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.414956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.414978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:44.486 [2024-11-20 17:03:02.429150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.429169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.442460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.442479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.456968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.456986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.470916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.470935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.484952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.484970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.498937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.498956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.486 [2024-11-20 17:03:02.512917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.486 [2024-11-20 17:03:02.512935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.526726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.526744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.540263] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.540281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.554415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.554434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.568136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.568156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.582102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.582121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.595893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.595912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.609798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.609816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.623660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.623678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.637633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.637652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.651452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.651471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.665325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.665343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.679396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.679422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.693654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.693674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.704949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.704969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.719409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.719428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.733673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.733694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.747726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.747745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.761807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 
[2024-11-20 17:03:02.761825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.745 [2024-11-20 17:03:02.775625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.745 [2024-11-20 17:03:02.775644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.789672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.789691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.803458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.803476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.817344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.817363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.831257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.831275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.844909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.844928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.858597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.858615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.872349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.872369] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.886330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.886349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.900130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.004 [2024-11-20 17:03:02.900150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.004 [2024-11-20 17:03:02.914039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:02.914058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.005 [2024-11-20 17:03:02.928153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:02.928172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.005 [2024-11-20 17:03:02.941862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:02.941885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.005 [2024-11-20 17:03:02.955956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:02.955975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.005 [2024-11-20 17:03:02.969924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:02.969943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.005 [2024-11-20 17:03:02.984057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:02.984076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:45.005 [2024-11-20 17:03:02.998066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:02.998085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.005 [2024-11-20 17:03:03.009242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:03.009260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.005 [2024-11-20 17:03:03.023642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:03.023661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.005 [2024-11-20 17:03:03.037779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.005 [2024-11-20 17:03:03.037797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.263 [2024-11-20 17:03:03.049256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.263 [2024-11-20 17:03:03.049275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.263 [2024-11-20 17:03:03.063614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.263 [2024-11-20 17:03:03.063633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.263 [2024-11-20 17:03:03.077346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.077364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.091290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.091308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.105048] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.105067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.118938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.118956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.132767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.132786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 16780.40 IOPS, 131.10 MiB/s 00:08:45.264 Latency(us) 00:08:45.264 [2024-11-20T16:03:03.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.264 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:45.264 Nvme1n1 : 5.00 16789.73 131.17 0.00 0.00 7617.62 3542.06 17850.76 00:08:45.264 [2024-11-20T16:03:03.307Z] =================================================================================================================== 00:08:45.264 [2024-11-20T16:03:03.307Z] Total : 16789.73 131.17 0.00 0.00 7617.62 3542.06 17850.76 00:08:45.264 [2024-11-20 17:03:03.143016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.143033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.155044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.155058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.167085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.167100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.179116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.179134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.191139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.191152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.203188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.203205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.215204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.215218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.227251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.227265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.239266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.239278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.251298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.251310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.263329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.263338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 
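The MiB/s column in the summary above follows directly from IOPS at the 8 KiB I/O size this job uses ("IO size: 8192"): throughput in MiB/s is IOPS × 8192 / 2^20. A quick sanity check against the reported totals, using only the figures printed in this log:

```shell
#!/bin/sh
# Reported totals from the bdevperf-style summary in this run:
iops=16789.73
io_size=8192   # bytes per I/O, from "IO size: 8192" in the Job line

# MiB/s = IOPS * io_size / 1048576; awk handles the floating point.
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'
```

This reproduces the 131.17 MiB/s shown next to 16789.73 IOPS in the Total row.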
[2024-11-20 17:03:03.275364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.275377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.287391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.287402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.264 [2024-11-20 17:03:03.299422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.264 [2024-11-20 17:03:03.299432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2371353) - No such process 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2371353 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.523 delay0 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.523 17:03:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.523 17:03:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:45.523 [2024-11-20 17:03:03.447831] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:52.086 Initializing NVMe Controllers 00:08:52.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:52.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:52.086 Initialization complete. Launching workers. 
00:08:52.086 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 552 00:08:52.086 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 839, failed to submit 33 00:08:52.086 success 644, unsuccessful 195, failed 0 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.086 rmmod nvme_tcp 00:08:52.086 rmmod nvme_fabrics 00:08:52.086 rmmod nvme_keyring 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2369498 ']' 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2369498 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2369498 ']' 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2369498 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2369498 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2369498' 00:08:52.086 killing process with pid 2369498 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2369498 00:08:52.086 17:03:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2369498 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.086 17:03:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:54.621 00:08:54.621 real 0m31.656s 00:08:54.621 user 0m42.467s 00:08:54.621 sys 0m11.147s 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.621 ************************************ 00:08:54.621 END TEST nvmf_zcopy 00:08:54.621 ************************************ 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.621 ************************************ 00:08:54.621 START TEST nvmf_nmic 00:08:54.621 ************************************ 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:54.621 * Looking for test storage... 
00:08:54.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.621 17:03:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.621 --rc genhtml_branch_coverage=1 00:08:54.621 --rc genhtml_function_coverage=1 00:08:54.621 --rc genhtml_legend=1 00:08:54.621 --rc geninfo_all_blocks=1 00:08:54.621 --rc geninfo_unexecuted_blocks=1 
00:08:54.621 00:08:54.621 ' 00:08:54.621 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.621 --rc genhtml_branch_coverage=1 00:08:54.622 --rc genhtml_function_coverage=1 00:08:54.622 --rc genhtml_legend=1 00:08:54.622 --rc geninfo_all_blocks=1 00:08:54.622 --rc geninfo_unexecuted_blocks=1 00:08:54.622 00:08:54.622 ' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:54.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.622 --rc genhtml_branch_coverage=1 00:08:54.622 --rc genhtml_function_coverage=1 00:08:54.622 --rc genhtml_legend=1 00:08:54.622 --rc geninfo_all_blocks=1 00:08:54.622 --rc geninfo_unexecuted_blocks=1 00:08:54.622 00:08:54.622 ' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:54.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.622 --rc genhtml_branch_coverage=1 00:08:54.622 --rc genhtml_function_coverage=1 00:08:54.622 --rc genhtml_legend=1 00:08:54.622 --rc geninfo_all_blocks=1 00:08:54.622 --rc geninfo_unexecuted_blocks=1 00:08:54.622 00:08:54.622 ' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.622 17:03:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:54.622 
17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:54.622 17:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.189 17:03:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:01.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:01.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:01.189 Found net devices under 0000:86:00.0: cvl_0_0 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:01.189 Found net devices under 0000:86:00.1: cvl_0_1 00:09:01.189 
17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.189 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:09:01.190 00:09:01.190 --- 10.0.0.2 ping statistics --- 00:09:01.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.190 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:09:01.190 00:09:01.190 --- 10.0.0.1 ping statistics --- 00:09:01.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.190 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2377469 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2377469 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2377469 
']' 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.190 17:03:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.190 [2024-11-20 17:03:18.402359] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:09:01.190 [2024-11-20 17:03:18.402402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.190 [2024-11-20 17:03:18.480717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.190 [2024-11-20 17:03:18.522242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.190 [2024-11-20 17:03:18.522280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.190 [2024-11-20 17:03:18.522286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.190 [2024-11-20 17:03:18.522292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:01.190 [2024-11-20 17:03:18.522297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.190 [2024-11-20 17:03:18.523895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.190 [2024-11-20 17:03:18.524002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.190 [2024-11-20 17:03:18.524107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.190 [2024-11-20 17:03:18.524108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 [2024-11-20 17:03:19.271908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.447 17:03:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 Malloc0 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 [2024-11-20 17:03:19.330433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:01.447 test case1: single bdev can't be used in multiple subsystems 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 [2024-11-20 17:03:19.354296] bdev.c:8473:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:01.447 [2024-11-20 17:03:19.354316] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:01.447 [2024-11-20 17:03:19.354323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:09:01.447 request: 00:09:01.447 { 00:09:01.447 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:01.447 "namespace": { 00:09:01.447 "bdev_name": "Malloc0", 00:09:01.447 "no_auto_visible": false, 00:09:01.447 "hide_metadata": false 00:09:01.447 }, 00:09:01.447 "method": "nvmf_subsystem_add_ns", 00:09:01.447 "req_id": 1 00:09:01.447 } 00:09:01.447 Got JSON-RPC error response 00:09:01.447 response: 00:09:01.447 { 00:09:01.447 "code": -32602, 00:09:01.447 "message": "Invalid parameters" 00:09:01.447 } 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:01.447 Adding namespace failed - expected result. 
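Test case1 above deliberately provokes a failure: `Malloc0` is already claimed (exclusive_write) by `cnode1`, so adding it to `cnode2` returns JSON-RPC error -32602 and the script records that as the expected result via its `nmic_status` flag. A runnable sketch of that expected-failure pattern is below; `rpc_cmd` is a hypothetical stub mimicking the error response here, whereas the real script dispatches to `scripts/rpc.py` against the running target.

```shell
# Expected-failure pattern from test case1, with rpc_cmd stubbed.
rpc_cmd() {  # stub: emulate the -32602 "Invalid parameters" error and fail
  echo '{"code": -32602, "message": "Invalid parameters"}' >&2
  return 1
}

nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 2>/dev/null \
  || nmic_status=1

if [ "$nmic_status" -eq 0 ]; then
  echo "Adding namespace passed - failure was expected."
  exit 1
fi
echo " Adding namespace failed - expected result."
```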
00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:01.447 test case2: host connect to nvmf target in multiple paths 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.447 [2024-11-20 17:03:19.366421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.447 17:03:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.818 17:03:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:03.753 17:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.753 17:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:03.753 17:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.753 17:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:03.753 17:03:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
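Test case2 connects the host to the same subsystem NQN over both listeners (ports 4420 and 4421), then `waitforserial` polls `lsblk` until a block device with the target's serial shows up. A sketch of that polling loop is below; `lsblk` is stubbed with fixed output so the sketch runs without NVMe hardware, and the retry bound mirrors the `i++ <= 15` loop in the trace.

```shell
# Sketch of the waitforserial polling loop, with lsblk stubbed.
SERIAL=SPDKISFASTANDAWESOME
lsblk() { printf 'NAME SERIAL\nnvme0n1 %s\n' "$SERIAL"; }  # stub for the real lsblk

nvme_devices=0
for i in $(seq 1 15); do
  nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL")
  [ "$nvme_devices" -ge 1 ] && break   # device enumerated; stop polling
  sleep 2
done
echo "found $nvme_devices device(s) with serial $SERIAL"
```

In the real run the multipath connect yields one namespace reachable via two controllers, which is why the later disconnect reports "disconnected 2 controller(s)".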
00:09:06.282 17:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:06.282 17:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:06.282 17:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.282 17:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:06.282 17:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.282 17:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:06.282 17:03:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:06.282 [global] 00:09:06.282 thread=1 00:09:06.282 invalidate=1 00:09:06.282 rw=write 00:09:06.282 time_based=1 00:09:06.282 runtime=1 00:09:06.282 ioengine=libaio 00:09:06.282 direct=1 00:09:06.282 bs=4096 00:09:06.282 iodepth=1 00:09:06.282 norandommap=0 00:09:06.282 numjobs=1 00:09:06.282 00:09:06.282 verify_dump=1 00:09:06.282 verify_backlog=512 00:09:06.282 verify_state_save=0 00:09:06.282 do_verify=1 00:09:06.282 verify=crc32c-intel 00:09:06.282 [job0] 00:09:06.282 filename=/dev/nvme0n1 00:09:06.282 Could not set queue depth (nvme0n1) 00:09:06.282 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.282 fio-3.35 00:09:06.282 Starting 1 thread 00:09:07.216 00:09:07.216 job0: (groupid=0, jobs=1): err= 0: pid=2378555: Wed Nov 20 17:03:25 2024 00:09:07.216 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:07.216 slat (nsec): min=6686, max=25925, avg=7586.88, stdev=909.29 00:09:07.216 clat (usec): min=150, max=2003, avg=219.73, stdev=44.46 00:09:07.216 lat (usec): min=157, max=2011, 
avg=227.32, stdev=44.46 00:09:07.216 clat percentiles (usec): 00:09:07.216 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 194], 20.00th=[ 200], 00:09:07.216 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 223], 00:09:07.216 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 269], 00:09:07.216 | 99.00th=[ 285], 99.50th=[ 285], 99.90th=[ 367], 99.95th=[ 375], 00:09:07.216 | 99.99th=[ 2008] 00:09:07.216 write: IOPS=3013, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1001msec); 0 zone resets 00:09:07.216 slat (nsec): min=9848, max=74191, avg=10882.54, stdev=1483.56 00:09:07.216 clat (usec): min=98, max=273, avg=122.39, stdev=17.04 00:09:07.216 lat (usec): min=108, max=345, avg=133.27, stdev=17.35 00:09:07.216 clat percentiles (usec): 00:09:07.216 | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 114], 00:09:07.216 | 30.00th=[ 115], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 119], 00:09:07.216 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 147], 95.00th=[ 159], 00:09:07.216 | 99.00th=[ 182], 99.50th=[ 223], 99.90th=[ 253], 99.95th=[ 273], 00:09:07.216 | 99.99th=[ 273] 00:09:07.216 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:07.216 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:07.216 lat (usec) : 100=0.05%, 250=94.03%, 500=5.90% 00:09:07.216 lat (msec) : 4=0.02% 00:09:07.216 cpu : usr=3.60%, sys=4.70%, ctx=5577, majf=0, minf=1 00:09:07.216 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.216 issued rwts: total=2560,3017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.216 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.216 00:09:07.216 Run status group 0 (all jobs): 00:09:07.216 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), 
run=1001-1001msec 00:09:07.216 WRITE: bw=11.8MiB/s (12.3MB/s), 11.8MiB/s-11.8MiB/s (12.3MB/s-12.3MB/s), io=11.8MiB (12.4MB), run=1001-1001msec 00:09:07.216 00:09:07.216 Disk stats (read/write): 00:09:07.216 nvme0n1: ios=2464/2560, merge=0/0, ticks=560/304, in_queue=864, util=95.79% 00:09:07.216 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.475 rmmod nvme_tcp 00:09:07.475 rmmod nvme_fabrics 00:09:07.475 rmmod nvme_keyring 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2377469 ']' 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2377469 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2377469 ']' 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2377469 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.475 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2377469 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2377469' 00:09:07.734 killing process with pid 2377469 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2377469 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2377469 00:09:07.734 17:03:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.734 17:03:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.269 00:09:10.269 real 0m15.636s 00:09:10.269 user 0m36.003s 00:09:10.269 sys 0m5.456s 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:10.269 ************************************ 00:09:10.269 END TEST nvmf_nmic 00:09:10.269 ************************************ 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.269 ************************************ 00:09:10.269 START TEST nvmf_fio_target 00:09:10.269 ************************************ 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:10.269 * Looking for test storage... 00:09:10.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:10.269 17:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:10.269 17:03:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.269 --rc genhtml_branch_coverage=1 00:09:10.269 --rc genhtml_function_coverage=1 00:09:10.269 --rc genhtml_legend=1 00:09:10.269 --rc geninfo_all_blocks=1 00:09:10.269 --rc geninfo_unexecuted_blocks=1 00:09:10.269 00:09:10.269 ' 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.269 --rc genhtml_branch_coverage=1 00:09:10.269 --rc genhtml_function_coverage=1 00:09:10.269 --rc genhtml_legend=1 00:09:10.269 --rc geninfo_all_blocks=1 00:09:10.269 --rc geninfo_unexecuted_blocks=1 00:09:10.269 00:09:10.269 ' 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.269 --rc genhtml_branch_coverage=1 00:09:10.269 --rc genhtml_function_coverage=1 00:09:10.269 --rc genhtml_legend=1 00:09:10.269 --rc geninfo_all_blocks=1 00:09:10.269 --rc geninfo_unexecuted_blocks=1 00:09:10.269 00:09:10.269 ' 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:10.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.269 --rc genhtml_branch_coverage=1 00:09:10.269 --rc genhtml_function_coverage=1 00:09:10.269 --rc genhtml_legend=1 00:09:10.269 --rc geninfo_all_blocks=1 00:09:10.269 --rc geninfo_unexecuted_blocks=1 00:09:10.269 00:09:10.269 ' 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.269 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.270 17:03:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.835 17:03:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:16.835 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:16.835 17:03:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:16.835 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:16.835 Found net devices under 0000:86:00.0: cvl_0_0 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.835 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:16.836 Found net devices under 0000:86:00.1: cvl_0_1 
00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.836 17:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:09:16.836 00:09:16.836 --- 10.0.0.2 ping statistics --- 00:09:16.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.836 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:09:16.836 00:09:16.836 --- 10.0.0.1 ping statistics --- 00:09:16.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.836 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2382338 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2382338 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2382338 ']' 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.836 [2024-11-20 17:03:34.110282] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:09:16.836 [2024-11-20 17:03:34.110324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.836 [2024-11-20 17:03:34.186700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.836 [2024-11-20 17:03:34.228792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.836 [2024-11-20 17:03:34.228828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.836 [2024-11-20 17:03:34.228835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.836 [2024-11-20 17:03:34.228841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.836 [2024-11-20 17:03:34.228846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:16.836 [2024-11-20 17:03:34.230387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.836 [2024-11-20 17:03:34.230496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.836 [2024-11-20 17:03:34.230602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.836 [2024-11-20 17:03:34.230603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.836 [2024-11-20 17:03:34.533284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:16.836 17:03:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.095 17:03:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:17.095 17:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.353 17:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:17.353 17:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.610 17:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:17.610 17:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:17.610 17:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.868 17:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:17.868 17:03:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.126 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:18.126 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.385 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:18.385 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:18.643 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.643 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:18.643 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.902 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:18.902 17:03:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.161 17:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.419 [2024-11-20 17:03:37.216906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.419 17:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:19.419 17:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:19.676 17:03:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:21.050 17:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:21.050 17:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:21.050 17:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.050 17:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:21.050 17:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:21.050 17:03:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:22.952 17:03:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:22.952 17:03:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:22.952 17:03:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.952 17:03:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:22.952 17:03:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.952 17:03:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:22.952 17:03:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:22.952 [global] 00:09:22.952 thread=1 00:09:22.952 invalidate=1 00:09:22.952 rw=write 00:09:22.952 time_based=1 00:09:22.952 runtime=1 00:09:22.952 ioengine=libaio 00:09:22.952 direct=1 00:09:22.952 bs=4096 00:09:22.952 iodepth=1 00:09:22.952 norandommap=0 00:09:22.952 numjobs=1 00:09:22.952 00:09:22.952 
verify_dump=1 00:09:22.952 verify_backlog=512 00:09:22.952 verify_state_save=0 00:09:22.952 do_verify=1 00:09:22.952 verify=crc32c-intel 00:09:22.952 [job0] 00:09:22.952 filename=/dev/nvme0n1 00:09:22.952 [job1] 00:09:22.952 filename=/dev/nvme0n2 00:09:22.952 [job2] 00:09:22.952 filename=/dev/nvme0n3 00:09:22.952 [job3] 00:09:22.952 filename=/dev/nvme0n4 00:09:22.952 Could not set queue depth (nvme0n1) 00:09:22.952 Could not set queue depth (nvme0n2) 00:09:22.952 Could not set queue depth (nvme0n3) 00:09:22.952 Could not set queue depth (nvme0n4) 00:09:23.211 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.211 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.211 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.211 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.211 fio-3.35 00:09:23.211 Starting 4 threads 00:09:24.587 00:09:24.587 job0: (groupid=0, jobs=1): err= 0: pid=2383688: Wed Nov 20 17:03:42 2024 00:09:24.587 read: IOPS=1557, BW=6230KiB/s (6379kB/s)(6236KiB/1001msec) 00:09:24.587 slat (nsec): min=5564, max=43299, avg=7769.00, stdev=1760.12 00:09:24.587 clat (usec): min=151, max=41404, avg=408.28, stdev=2724.09 00:09:24.587 lat (usec): min=158, max=41415, avg=416.05, stdev=2724.89 00:09:24.587 clat percentiles (usec): 00:09:24.587 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 190], 00:09:24.587 | 30.00th=[ 206], 40.00th=[ 223], 50.00th=[ 235], 60.00th=[ 241], 00:09:24.587 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 269], 00:09:24.587 | 99.00th=[ 302], 99.50th=[ 494], 99.90th=[41157], 99.95th=[41157], 00:09:24.587 | 99.99th=[41157] 00:09:24.587 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:24.587 slat (nsec): min=9036, max=42018, avg=11287.02, 
stdev=2265.65 00:09:24.587 clat (usec): min=107, max=309, avg=155.65, stdev=27.41 00:09:24.587 lat (usec): min=118, max=339, avg=166.93, stdev=27.85 00:09:24.587 clat percentiles (usec): 00:09:24.587 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 135], 00:09:24.587 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 157], 00:09:24.587 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 204], 00:09:24.587 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 281], 99.95th=[ 302], 00:09:24.587 | 99.99th=[ 310] 00:09:24.587 bw ( KiB/s): min= 8192, max= 8192, per=45.78%, avg=8192.00, stdev= 0.00, samples=1 00:09:24.587 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:24.587 lat (usec) : 250=89.38%, 500=10.42% 00:09:24.587 lat (msec) : 50=0.19% 00:09:24.587 cpu : usr=2.10%, sys=5.60%, ctx=3607, majf=0, minf=1 00:09:24.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.587 issued rwts: total=1559,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.587 job1: (groupid=0, jobs=1): err= 0: pid=2383690: Wed Nov 20 17:03:42 2024 00:09:24.587 read: IOPS=1017, BW=4070KiB/s (4168kB/s)(4188KiB/1029msec) 00:09:24.587 slat (nsec): min=6425, max=34071, avg=8340.62, stdev=3010.08 00:09:24.587 clat (usec): min=146, max=41991, avg=728.33, stdev=4539.07 00:09:24.587 lat (usec): min=153, max=42014, avg=736.67, stdev=4540.51 00:09:24.587 clat percentiles (usec): 00:09:24.587 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:09:24.587 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 219], 60.00th=[ 247], 00:09:24.587 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 302], 00:09:24.587 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 
00:09:24.587 | 99.99th=[42206] 00:09:24.587 write: IOPS=1492, BW=5971KiB/s (6114kB/s)(6144KiB/1029msec); 0 zone resets 00:09:24.587 slat (nsec): min=9225, max=38455, avg=11121.27, stdev=2767.43 00:09:24.587 clat (usec): min=103, max=286, avg=152.27, stdev=33.43 00:09:24.587 lat (usec): min=113, max=297, avg=163.39, stdev=34.59 00:09:24.587 clat percentiles (usec): 00:09:24.588 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 121], 00:09:24.588 | 30.00th=[ 128], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 155], 00:09:24.588 | 70.00th=[ 169], 80.00th=[ 186], 90.00th=[ 200], 95.00th=[ 215], 00:09:24.588 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 269], 99.95th=[ 285], 00:09:24.588 | 99.99th=[ 285] 00:09:24.588 bw ( KiB/s): min=12288, max=12288, per=68.67%, avg=12288.00, stdev= 0.00, samples=1 00:09:24.588 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:24.588 lat (usec) : 250=85.13%, 500=14.29%, 750=0.08% 00:09:24.588 lat (msec) : 50=0.50% 00:09:24.588 cpu : usr=0.68%, sys=3.02%, ctx=2583, majf=0, minf=1 00:09:24.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.588 issued rwts: total=1047,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.588 job2: (groupid=0, jobs=1): err= 0: pid=2383691: Wed Nov 20 17:03:42 2024 00:09:24.588 read: IOPS=449, BW=1800KiB/s (1843kB/s)(1816KiB/1009msec) 00:09:24.588 slat (nsec): min=8393, max=37175, avg=10130.29, stdev=3374.47 00:09:24.588 clat (usec): min=211, max=41081, avg=1964.85, stdev=8154.87 00:09:24.588 lat (usec): min=220, max=41105, avg=1974.98, stdev=8157.52 00:09:24.588 clat percentiles (usec): 00:09:24.588 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:09:24.588 | 30.00th=[ 251], 40.00th=[ 255], 
50.00th=[ 260], 60.00th=[ 265], 00:09:24.588 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 351], 00:09:24.588 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:24.588 | 99.99th=[41157] 00:09:24.588 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:24.588 slat (nsec): min=12417, max=52601, avg=16517.34, stdev=7213.17 00:09:24.588 clat (usec): min=143, max=364, avg=194.29, stdev=30.04 00:09:24.588 lat (usec): min=157, max=403, avg=210.81, stdev=31.67 00:09:24.588 clat percentiles (usec): 00:09:24.588 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 176], 00:09:24.588 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:09:24.588 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 233], 95.00th=[ 258], 00:09:24.588 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 367], 99.95th=[ 367], 00:09:24.588 | 99.99th=[ 367] 00:09:24.588 bw ( KiB/s): min= 4096, max= 4096, per=22.89%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.588 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.588 lat (usec) : 250=63.04%, 500=34.99% 00:09:24.588 lat (msec) : 50=1.97% 00:09:24.588 cpu : usr=0.30%, sys=2.38%, ctx=967, majf=0, minf=1 00:09:24.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.588 issued rwts: total=454,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.588 job3: (groupid=0, jobs=1): err= 0: pid=2383692: Wed Nov 20 17:03:42 2024 00:09:24.588 read: IOPS=156, BW=625KiB/s (640kB/s)(644KiB/1030msec) 00:09:24.588 slat (nsec): min=7037, max=26399, avg=9845.53, stdev=5615.78 00:09:24.588 clat (usec): min=240, max=41465, avg=5785.15, stdev=13793.15 00:09:24.588 lat (usec): min=248, max=41475, avg=5794.99, 
stdev=13797.93 00:09:24.588 clat percentiles (usec): 00:09:24.588 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 269], 00:09:24.588 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 408], 00:09:24.588 | 70.00th=[ 437], 80.00th=[ 474], 90.00th=[41157], 95.00th=[41157], 00:09:24.588 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:24.588 | 99.99th=[41681] 00:09:24.588 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:09:24.588 slat (nsec): min=9456, max=37489, avg=10645.83, stdev=1607.34 00:09:24.588 clat (usec): min=143, max=303, avg=174.98, stdev=15.70 00:09:24.588 lat (usec): min=153, max=341, avg=185.63, stdev=16.19 00:09:24.588 clat percentiles (usec): 00:09:24.588 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:24.588 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:09:24.588 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:09:24.588 | 99.00th=[ 217], 99.50th=[ 265], 99.90th=[ 306], 99.95th=[ 306], 00:09:24.588 | 99.99th=[ 306] 00:09:24.588 bw ( KiB/s): min= 4096, max= 4096, per=22.89%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.588 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.588 lat (usec) : 250=76.08%, 500=19.91%, 750=0.74% 00:09:24.588 lat (msec) : 50=3.27% 00:09:24.588 cpu : usr=0.29%, sys=0.68%, ctx=676, majf=0, minf=1 00:09:24.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.588 issued rwts: total=161,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.588 00:09:24.588 Run status group 0 (all jobs): 00:09:24.588 READ: bw=12.2MiB/s (12.8MB/s), 625KiB/s-6230KiB/s (640kB/s-6379kB/s), io=12.6MiB (13.2MB), run=1001-1030msec 
00:09:24.588 WRITE: bw=17.5MiB/s (18.3MB/s), 1988KiB/s-8184KiB/s (2036kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1030msec 00:09:24.588 00:09:24.588 Disk stats (read/write): 00:09:24.588 nvme0n1: ios=1349/1536, merge=0/0, ticks=582/219, in_queue=801, util=86.37% 00:09:24.588 nvme0n2: ios=1047/1536, merge=0/0, ticks=559/228, in_queue=787, util=86.97% 00:09:24.588 nvme0n3: ios=508/512, merge=0/0, ticks=1458/93, in_queue=1551, util=98.43% 00:09:24.588 nvme0n4: ios=180/512, merge=0/0, ticks=1713/91, in_queue=1804, util=98.42% 00:09:24.588 17:03:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:24.588 [global] 00:09:24.588 thread=1 00:09:24.588 invalidate=1 00:09:24.588 rw=randwrite 00:09:24.588 time_based=1 00:09:24.588 runtime=1 00:09:24.588 ioengine=libaio 00:09:24.588 direct=1 00:09:24.588 bs=4096 00:09:24.588 iodepth=1 00:09:24.588 norandommap=0 00:09:24.588 numjobs=1 00:09:24.588 00:09:24.588 verify_dump=1 00:09:24.588 verify_backlog=512 00:09:24.588 verify_state_save=0 00:09:24.588 do_verify=1 00:09:24.588 verify=crc32c-intel 00:09:24.588 [job0] 00:09:24.588 filename=/dev/nvme0n1 00:09:24.588 [job1] 00:09:24.588 filename=/dev/nvme0n2 00:09:24.588 [job2] 00:09:24.588 filename=/dev/nvme0n3 00:09:24.588 [job3] 00:09:24.588 filename=/dev/nvme0n4 00:09:24.588 Could not set queue depth (nvme0n1) 00:09:24.588 Could not set queue depth (nvme0n2) 00:09:24.588 Could not set queue depth (nvme0n3) 00:09:24.588 Could not set queue depth (nvme0n4) 00:09:24.847 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.847 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.847 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.847 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.847 fio-3.35 00:09:24.847 Starting 4 threads 00:09:26.220 00:09:26.220 job0: (groupid=0, jobs=1): err= 0: pid=2384058: Wed Nov 20 17:03:43 2024 00:09:26.220 read: IOPS=378, BW=1514KiB/s (1551kB/s)(1516KiB/1001msec) 00:09:26.220 slat (nsec): min=3750, max=31527, avg=10702.04, stdev=3885.93 00:09:26.220 clat (usec): min=176, max=42043, avg=2346.10, stdev=9078.88 00:09:26.220 lat (usec): min=185, max=42065, avg=2356.80, stdev=9081.18 00:09:26.220 clat percentiles (usec): 00:09:26.220 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 198], 00:09:26.220 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:09:26.220 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[40633], 00:09:26.220 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:26.220 | 99.99th=[42206] 00:09:26.220 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:26.220 slat (nsec): min=3862, max=56415, avg=6642.74, stdev=5435.92 00:09:26.220 clat (usec): min=122, max=306, avg=197.63, stdev=27.41 00:09:26.220 lat (usec): min=126, max=344, avg=204.28, stdev=27.60 00:09:26.220 clat percentiles (usec): 00:09:26.220 | 1.00th=[ 143], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 176], 00:09:26.220 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 00:09:26.220 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 241], 00:09:26.220 | 99.00th=[ 262], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 306], 00:09:26.220 | 99.99th=[ 306] 00:09:26.220 bw ( KiB/s): min= 4096, max= 4096, per=17.28%, avg=4096.00, stdev= 0.00, samples=1 00:09:26.220 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:26.220 lat (usec) : 250=95.40%, 500=2.13%, 750=0.11% 00:09:26.220 lat (msec) : 2=0.11%, 50=2.24% 00:09:26.220 cpu : usr=0.70%, sys=0.90%, ctx=892, majf=0, minf=1 00:09:26.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.220 issued rwts: total=379,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.220 job1: (groupid=0, jobs=1): err= 0: pid=2384059: Wed Nov 20 17:03:43 2024 00:09:26.220 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:26.220 slat (nsec): min=6680, max=31252, avg=8528.81, stdev=2179.46 00:09:26.220 clat (usec): min=143, max=533, avg=213.18, stdev=48.70 00:09:26.220 lat (usec): min=151, max=541, avg=221.71, stdev=49.46 00:09:26.220 clat percentiles (usec): 00:09:26.220 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:09:26.220 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 208], 60.00th=[ 223], 00:09:26.220 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 310], 00:09:26.220 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 490], 99.95th=[ 502], 00:09:26.220 | 99.99th=[ 537] 00:09:26.220 write: IOPS=2615, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:09:26.220 slat (nsec): min=10044, max=50283, avg=11866.57, stdev=2674.80 00:09:26.220 clat (usec): min=100, max=361, avg=146.95, stdev=33.37 00:09:26.220 lat (usec): min=111, max=399, avg=158.82, stdev=34.03 00:09:26.220 clat percentiles (usec): 00:09:26.220 | 1.00th=[ 108], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 120], 00:09:26.220 | 30.00th=[ 124], 40.00th=[ 131], 50.00th=[ 139], 60.00th=[ 147], 00:09:26.220 | 70.00th=[ 157], 80.00th=[ 169], 90.00th=[ 196], 95.00th=[ 221], 00:09:26.220 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 314], 99.95th=[ 351], 00:09:26.220 | 99.99th=[ 363] 00:09:26.220 bw ( KiB/s): min=12288, max=12288, per=51.83%, avg=12288.00, stdev= 0.00, samples=1 00:09:26.220 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:26.220 lat (usec) : 250=90.31%, 
500=9.66%, 750=0.04% 00:09:26.220 cpu : usr=3.20%, sys=6.00%, ctx=5179, majf=0, minf=1 00:09:26.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.220 issued rwts: total=2560,2618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.220 job2: (groupid=0, jobs=1): err= 0: pid=2384060: Wed Nov 20 17:03:43 2024 00:09:26.220 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:26.220 slat (nsec): min=7247, max=39065, avg=9258.55, stdev=2781.96 00:09:26.220 clat (usec): min=194, max=41948, avg=720.88, stdev=4204.32 00:09:26.220 lat (usec): min=202, max=41987, avg=730.14, stdev=4205.31 00:09:26.221 clat percentiles (usec): 00:09:26.221 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 227], 00:09:26.221 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 262], 00:09:26.221 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 457], 95.00th=[ 490], 00:09:26.221 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:09:26.221 | 99.99th=[42206] 00:09:26.221 write: IOPS=1265, BW=5063KiB/s (5184kB/s)(5068KiB/1001msec); 0 zone resets 00:09:26.221 slat (nsec): min=5042, max=44877, avg=12531.31, stdev=2758.42 00:09:26.221 clat (usec): min=129, max=832, avg=181.11, stdev=40.59 00:09:26.221 lat (usec): min=141, max=845, avg=193.64, stdev=40.77 00:09:26.221 clat percentiles (usec): 00:09:26.221 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155], 00:09:26.221 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 184], 00:09:26.221 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 233], 00:09:26.221 | 99.00th=[ 289], 99.50th=[ 334], 99.90th=[ 660], 99.95th=[ 832], 00:09:26.221 | 99.99th=[ 832] 00:09:26.221 bw ( KiB/s): min= 4096, max= 4096, per=17.28%, 
avg=4096.00, stdev= 0.00, samples=1 00:09:26.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:26.221 lat (usec) : 250=77.52%, 500=20.91%, 750=1.05%, 1000=0.04% 00:09:26.221 lat (msec) : 50=0.48% 00:09:26.221 cpu : usr=2.40%, sys=3.40%, ctx=2292, majf=0, minf=1 00:09:26.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.221 issued rwts: total=1024,1267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.221 job3: (groupid=0, jobs=1): err= 0: pid=2384061: Wed Nov 20 17:03:43 2024 00:09:26.221 read: IOPS=1471, BW=5886KiB/s (6027kB/s)(5892KiB/1001msec) 00:09:26.221 slat (nsec): min=6717, max=27457, avg=8492.93, stdev=2239.70 00:09:26.221 clat (usec): min=180, max=41919, avg=477.87, stdev=3010.44 00:09:26.221 lat (usec): min=187, max=41942, avg=486.36, stdev=3011.59 00:09:26.221 clat percentiles (usec): 00:09:26.221 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 215], 00:09:26.221 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:09:26.221 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 486], 00:09:26.221 | 99.00th=[ 523], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:09:26.221 | 99.99th=[41681] 00:09:26.221 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:26.221 slat (nsec): min=6409, max=87359, avg=11642.52, stdev=3042.15 00:09:26.221 clat (usec): min=106, max=324, avg=167.76, stdev=24.00 00:09:26.221 lat (usec): min=127, max=338, avg=179.40, stdev=24.91 00:09:26.221 clat percentiles (usec): 00:09:26.221 | 1.00th=[ 129], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:09:26.221 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:09:26.221 | 70.00th=[ 174], 80.00th=[ 182], 
90.00th=[ 192], 95.00th=[ 208], 00:09:26.221 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 326], 00:09:26.221 | 99.99th=[ 326] 00:09:26.221 bw ( KiB/s): min= 4096, max= 4096, per=17.28%, avg=4096.00, stdev= 0.00, samples=1 00:09:26.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:26.221 lat (usec) : 250=84.11%, 500=14.59%, 750=0.90% 00:09:26.221 lat (msec) : 2=0.13%, 50=0.27% 00:09:26.221 cpu : usr=2.40%, sys=3.30%, ctx=3012, majf=0, minf=1 00:09:26.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.221 issued rwts: total=1473,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.221 00:09:26.221 Run status group 0 (all jobs): 00:09:26.221 READ: bw=21.2MiB/s (22.2MB/s), 1514KiB/s-9.99MiB/s (1551kB/s-10.5MB/s), io=21.2MiB (22.3MB), run=1001-1001msec 00:09:26.221 WRITE: bw=23.2MiB/s (24.3MB/s), 2046KiB/s-10.2MiB/s (2095kB/s-10.7MB/s), io=23.2MiB (24.3MB), run=1001-1001msec 00:09:26.221 00:09:26.221 Disk stats (read/write): 00:09:26.221 nvme0n1: ios=424/512, merge=0/0, ticks=760/94, in_queue=854, util=86.07% 00:09:26.221 nvme0n2: ios=2097/2450, merge=0/0, ticks=894/347, in_queue=1241, util=90.15% 00:09:26.221 nvme0n3: ios=800/1024, merge=0/0, ticks=1529/180, in_queue=1709, util=93.56% 00:09:26.221 nvme0n4: ios=1047/1265, merge=0/0, ticks=1498/202, in_queue=1700, util=94.24% 00:09:26.221 17:03:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:26.221 [global] 00:09:26.221 thread=1 00:09:26.221 invalidate=1 00:09:26.221 rw=write 00:09:26.221 time_based=1 00:09:26.221 runtime=1 00:09:26.221 ioengine=libaio 00:09:26.221 direct=1 
00:09:26.221 bs=4096 00:09:26.221 iodepth=128 00:09:26.221 norandommap=0 00:09:26.221 numjobs=1 00:09:26.221 00:09:26.221 verify_dump=1 00:09:26.221 verify_backlog=512 00:09:26.221 verify_state_save=0 00:09:26.221 do_verify=1 00:09:26.221 verify=crc32c-intel 00:09:26.221 [job0] 00:09:26.221 filename=/dev/nvme0n1 00:09:26.221 [job1] 00:09:26.221 filename=/dev/nvme0n2 00:09:26.221 [job2] 00:09:26.221 filename=/dev/nvme0n3 00:09:26.221 [job3] 00:09:26.221 filename=/dev/nvme0n4 00:09:26.221 Could not set queue depth (nvme0n1) 00:09:26.221 Could not set queue depth (nvme0n2) 00:09:26.221 Could not set queue depth (nvme0n3) 00:09:26.221 Could not set queue depth (nvme0n4) 00:09:26.479 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.479 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.479 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.479 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.479 fio-3.35 00:09:26.479 Starting 4 threads 00:09:27.854 00:09:27.854 job0: (groupid=0, jobs=1): err= 0: pid=2384440: Wed Nov 20 17:03:45 2024 00:09:27.854 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:27.854 slat (nsec): min=1322, max=15740k, avg=98750.24, stdev=625086.60 00:09:27.854 clat (usec): min=4389, max=37002, avg=12961.57, stdev=4982.24 00:09:27.854 lat (usec): min=4398, max=37009, avg=13060.32, stdev=5018.32 00:09:27.854 clat percentiles (usec): 00:09:27.854 | 1.00th=[ 6259], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[10028], 00:09:27.854 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:09:27.854 | 70.00th=[13042], 80.00th=[13829], 90.00th=[21627], 95.00th=[24773], 00:09:27.854 | 99.00th=[32900], 99.50th=[33817], 99.90th=[36963], 99.95th=[36963], 00:09:27.854 | 
99.99th=[36963] 00:09:27.854 write: IOPS=4708, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1004msec); 0 zone resets 00:09:27.854 slat (usec): min=2, max=24389, avg=107.37, stdev=845.81 00:09:27.854 clat (usec): min=2929, max=64582, avg=14248.42, stdev=7516.46 00:09:27.854 lat (usec): min=2935, max=64611, avg=14355.79, stdev=7590.89 00:09:27.854 clat percentiles (usec): 00:09:27.854 | 1.00th=[ 5604], 5.00th=[ 8160], 10.00th=[ 9241], 20.00th=[10159], 00:09:27.854 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:09:27.854 | 70.00th=[13566], 80.00th=[16909], 90.00th=[21627], 95.00th=[30802], 00:09:27.854 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[54264], 00:09:27.854 | 99.99th=[64750] 00:09:27.854 bw ( KiB/s): min=15432, max=21432, per=25.24%, avg=18432.00, stdev=4242.64, samples=2 00:09:27.854 iops : min= 3858, max= 5358, avg=4608.00, stdev=1060.66, samples=2 00:09:27.854 lat (msec) : 4=0.17%, 10=19.22%, 20=69.41%, 50=10.57%, 100=0.63% 00:09:27.854 cpu : usr=3.89%, sys=5.98%, ctx=347, majf=0, minf=2 00:09:27.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:27.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.854 issued rwts: total=4608,4727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.854 job1: (groupid=0, jobs=1): err= 0: pid=2384441: Wed Nov 20 17:03:45 2024 00:09:27.854 read: IOPS=5494, BW=21.5MiB/s (22.5MB/s)(21.5MiB/1003msec) 00:09:27.854 slat (nsec): min=1150, max=10519k, avg=87474.17, stdev=577646.30 00:09:27.854 clat (usec): min=2312, max=33006, avg=11284.54, stdev=3353.76 00:09:27.854 lat (usec): min=2317, max=33009, avg=11372.02, stdev=3374.13 00:09:27.854 clat percentiles (usec): 00:09:27.854 | 1.00th=[ 3195], 5.00th=[ 6915], 10.00th=[ 8455], 20.00th=[ 9372], 00:09:27.854 | 30.00th=[ 9765], 
40.00th=[10028], 50.00th=[10683], 60.00th=[11338], 00:09:27.854 | 70.00th=[11600], 80.00th=[12911], 90.00th=[15533], 95.00th=[18220], 00:09:27.854 | 99.00th=[21365], 99.50th=[23200], 99.90th=[32900], 99.95th=[32900], 00:09:27.854 | 99.99th=[32900] 00:09:27.854 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:27.854 slat (usec): min=2, max=12237, avg=82.84, stdev=500.20 00:09:27.854 clat (usec): min=1866, max=35538, avg=11468.77, stdev=4837.08 00:09:27.854 lat (usec): min=1878, max=35546, avg=11551.61, stdev=4881.76 00:09:27.854 clat percentiles (usec): 00:09:27.854 | 1.00th=[ 2900], 5.00th=[ 6259], 10.00th=[ 7898], 20.00th=[ 9372], 00:09:27.854 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10421], 00:09:27.854 | 70.00th=[11076], 80.00th=[11731], 90.00th=[19268], 95.00th=[23200], 00:09:27.854 | 99.00th=[28181], 99.50th=[31589], 99.90th=[35390], 99.95th=[35390], 00:09:27.854 | 99.99th=[35390] 00:09:27.854 bw ( KiB/s): min=20480, max=24576, per=30.84%, avg=22528.00, stdev=2896.31, samples=2 00:09:27.854 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:27.854 lat (msec) : 2=0.10%, 4=1.81%, 10=40.66%, 20=52.01%, 50=5.41% 00:09:27.854 cpu : usr=3.09%, sys=6.29%, ctx=579, majf=0, minf=1 00:09:27.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:27.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.854 issued rwts: total=5511,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.854 job2: (groupid=0, jobs=1): err= 0: pid=2384442: Wed Nov 20 17:03:45 2024 00:09:27.854 read: IOPS=3794, BW=14.8MiB/s (15.5MB/s)(15.4MiB/1042msec) 00:09:27.854 slat (nsec): min=1404, max=16234k, avg=123749.21, stdev=676868.31 00:09:27.854 clat (usec): min=8541, max=54436, avg=16704.34, stdev=7636.26 
00:09:27.854 lat (usec): min=8553, max=54437, avg=16828.09, stdev=7666.16 00:09:27.854 clat percentiles (usec): 00:09:27.854 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[11731], 20.00th=[12780], 00:09:27.855 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14615], 60.00th=[15401], 00:09:27.855 | 70.00th=[16712], 80.00th=[18482], 90.00th=[22414], 95.00th=[27395], 00:09:27.855 | 99.00th=[51119], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:09:27.855 | 99.99th=[54264] 00:09:27.855 write: IOPS=3930, BW=15.4MiB/s (16.1MB/s)(16.0MiB/1042msec); 0 zone resets 00:09:27.855 slat (nsec): min=1954, max=12717k, avg=116879.13, stdev=687801.27 00:09:27.855 clat (usec): min=3546, max=52853, avg=16112.73, stdev=7404.49 00:09:27.855 lat (usec): min=3555, max=52862, avg=16229.61, stdev=7446.15 00:09:27.855 clat percentiles (usec): 00:09:27.855 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11338], 20.00th=[12518], 00:09:27.855 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13960], 00:09:27.855 | 70.00th=[15533], 80.00th=[17433], 90.00th=[23462], 95.00th=[33817], 00:09:27.855 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:09:27.855 | 99.99th=[52691] 00:09:27.855 bw ( KiB/s): min=13768, max=19000, per=22.43%, avg=16384.00, stdev=3699.58, samples=2 00:09:27.855 iops : min= 3442, max= 4750, avg=4096.00, stdev=924.90, samples=2 00:09:27.855 lat (msec) : 4=0.07%, 10=2.71%, 20=83.08%, 50=12.84%, 100=1.29% 00:09:27.855 cpu : usr=3.65%, sys=4.23%, ctx=402, majf=0, minf=1 00:09:27.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:27.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.855 issued rwts: total=3954,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.855 job3: (groupid=0, jobs=1): err= 0: pid=2384443: Wed Nov 20 17:03:45 
2024 00:09:27.855 read: IOPS=4400, BW=17.2MiB/s (18.0MB/s)(17.9MiB/1044msec) 00:09:27.855 slat (nsec): min=1379, max=43183k, avg=123845.66, stdev=1008796.92 00:09:27.855 clat (usec): min=4291, max=55847, avg=16917.42, stdev=10293.72 00:09:27.855 lat (usec): min=4302, max=55902, avg=17041.26, stdev=10319.50 00:09:27.855 clat percentiles (usec): 00:09:27.855 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11731], 00:09:27.855 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13304], 60.00th=[13960], 00:09:27.855 | 70.00th=[15139], 80.00th=[17957], 90.00th=[28967], 95.00th=[43779], 00:09:27.855 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:09:27.855 | 99.99th=[55837] 00:09:27.855 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:09:27.855 slat (usec): min=2, max=10762, avg=88.31, stdev=470.07 00:09:27.855 clat (usec): min=3112, max=28729, avg=11828.26, stdev=1855.81 00:09:27.855 lat (usec): min=3123, max=28734, avg=11916.57, stdev=1871.03 00:09:27.855 clat percentiles (usec): 00:09:27.855 | 1.00th=[ 4948], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10945], 00:09:27.855 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12387], 00:09:27.855 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13698], 95.00th=[13960], 00:09:27.855 | 99.00th=[15795], 99.50th=[17957], 99.90th=[22414], 99.95th=[22676], 00:09:27.855 | 99.99th=[28705] 00:09:27.855 bw ( KiB/s): min=16384, max=20480, per=25.24%, avg=18432.00, stdev=2896.31, samples=2 00:09:27.855 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:27.855 lat (msec) : 4=0.22%, 10=7.14%, 20=84.69%, 50=5.90%, 100=2.05% 00:09:27.855 cpu : usr=2.88%, sys=5.08%, ctx=540, majf=0, minf=1 00:09:27.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:27.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:27.855 issued rwts: total=4594,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.855 00:09:27.855 Run status group 0 (all jobs): 00:09:27.855 READ: bw=69.8MiB/s (73.2MB/s), 14.8MiB/s-21.5MiB/s (15.5MB/s-22.5MB/s), io=72.9MiB (76.5MB), run=1003-1044msec 00:09:27.855 WRITE: bw=71.3MiB/s (74.8MB/s), 15.4MiB/s-21.9MiB/s (16.1MB/s-23.0MB/s), io=74.5MiB (78.1MB), run=1003-1044msec 00:09:27.855 00:09:27.855 Disk stats (read/write): 00:09:27.855 nvme0n1: ios=3634/3816, merge=0/0, ticks=18417/22536, in_queue=40953, util=85.27% 00:09:27.855 nvme0n2: ios=4493/4608, merge=0/0, ticks=30454/27844, in_queue=58298, util=96.91% 00:09:27.855 nvme0n3: ios=3101/3402, merge=0/0, ticks=27408/33592, in_queue=61000, util=96.72% 00:09:27.855 nvme0n4: ios=3953/4096, merge=0/0, ticks=30213/24959, in_queue=55172, util=96.48% 00:09:27.855 17:03:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:27.855 [global] 00:09:27.855 thread=1 00:09:27.855 invalidate=1 00:09:27.855 rw=randwrite 00:09:27.855 time_based=1 00:09:27.855 runtime=1 00:09:27.855 ioengine=libaio 00:09:27.855 direct=1 00:09:27.855 bs=4096 00:09:27.855 iodepth=128 00:09:27.855 norandommap=0 00:09:27.855 numjobs=1 00:09:27.855 00:09:27.855 verify_dump=1 00:09:27.855 verify_backlog=512 00:09:27.855 verify_state_save=0 00:09:27.855 do_verify=1 00:09:27.855 verify=crc32c-intel 00:09:27.855 [job0] 00:09:27.855 filename=/dev/nvme0n1 00:09:27.855 [job1] 00:09:27.855 filename=/dev/nvme0n2 00:09:27.855 [job2] 00:09:27.855 filename=/dev/nvme0n3 00:09:27.855 [job3] 00:09:27.855 filename=/dev/nvme0n4 00:09:27.855 Could not set queue depth (nvme0n1) 00:09:27.855 Could not set queue depth (nvme0n2) 00:09:27.855 Could not set queue depth (nvme0n3) 00:09:27.855 Could not set queue depth (nvme0n4) 00:09:28.113 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.113 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.113 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.113 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:28.113 fio-3.35 00:09:28.113 Starting 4 threads 00:09:29.490 00:09:29.490 job0: (groupid=0, jobs=1): err= 0: pid=2384809: Wed Nov 20 17:03:47 2024 00:09:29.490 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:09:29.490 slat (nsec): min=1183, max=22369k, avg=122951.43, stdev=991042.05 00:09:29.490 clat (usec): min=5591, max=63816, avg=15712.35, stdev=9012.36 00:09:29.490 lat (usec): min=5597, max=63833, avg=15835.30, stdev=9099.49 00:09:29.490 clat percentiles (usec): 00:09:29.490 | 1.00th=[ 5604], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[11076], 00:09:29.490 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12518], 60.00th=[13173], 00:09:29.490 | 70.00th=[13698], 80.00th=[15795], 90.00th=[29492], 95.00th=[38536], 00:09:29.490 | 99.00th=[46400], 99.50th=[46400], 99.90th=[49546], 99.95th=[52167], 00:09:29.490 | 99.99th=[63701] 00:09:29.490 write: IOPS=4464, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1005msec); 0 zone resets 00:09:29.490 slat (usec): min=2, max=18152, avg=102.63, stdev=653.48 00:09:29.490 clat (usec): min=4217, max=48587, avg=13878.41, stdev=5678.69 00:09:29.490 lat (usec): min=4925, max=48644, avg=13981.03, stdev=5734.89 00:09:29.490 clat percentiles (usec): 00:09:29.490 | 1.00th=[ 5604], 5.00th=[ 6652], 10.00th=[ 8848], 20.00th=[10945], 00:09:29.490 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:09:29.490 | 70.00th=[14615], 80.00th=[17957], 90.00th=[20055], 95.00th=[28705], 00:09:29.490 | 99.00th=[33817], 99.50th=[33817], 99.90th=[36439], 99.95th=[37487], 00:09:29.490 | 
99.99th=[48497] 00:09:29.490 bw ( KiB/s): min=15096, max=19784, per=22.69%, avg=17440.00, stdev=3314.92, samples=2 00:09:29.490 iops : min= 3774, max= 4946, avg=4360.00, stdev=828.73, samples=2 00:09:29.490 lat (msec) : 10=13.83%, 20=73.04%, 50=13.08%, 100=0.05% 00:09:29.490 cpu : usr=2.59%, sys=5.58%, ctx=408, majf=0, minf=1 00:09:29.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:29.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.490 issued rwts: total=4096,4487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.490 job1: (groupid=0, jobs=1): err= 0: pid=2384813: Wed Nov 20 17:03:47 2024 00:09:29.490 read: IOPS=4326, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1005msec) 00:09:29.490 slat (nsec): min=1392, max=16688k, avg=111431.63, stdev=808455.70 00:09:29.490 clat (usec): min=1454, max=60356, avg=13534.74, stdev=6569.03 00:09:29.490 lat (usec): min=3743, max=60363, avg=13646.17, stdev=6636.90 00:09:29.490 clat percentiles (usec): 00:09:29.490 | 1.00th=[ 4752], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 9372], 00:09:29.490 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11863], 60.00th=[12649], 00:09:29.490 | 70.00th=[13960], 80.00th=[15533], 90.00th=[20841], 95.00th=[25035], 00:09:29.490 | 99.00th=[42730], 99.50th=[53740], 99.90th=[60556], 99.95th=[60556], 00:09:29.490 | 99.99th=[60556] 00:09:29.490 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:09:29.490 slat (usec): min=2, max=16888, avg=102.06, stdev=575.38 00:09:29.490 clat (usec): min=3020, max=61050, avg=14851.72, stdev=10563.36 00:09:29.490 lat (usec): min=3030, max=61064, avg=14953.78, stdev=10632.95 00:09:29.490 clat percentiles (usec): 00:09:29.490 | 1.00th=[ 3884], 5.00th=[ 5669], 10.00th=[ 7570], 20.00th=[ 8848], 00:09:29.490 | 30.00th=[10028], 40.00th=[10421], 
50.00th=[11338], 60.00th=[11469], 00:09:29.490 | 70.00th=[12125], 80.00th=[19530], 90.00th=[31065], 95.00th=[40633], 00:09:29.490 | 99.00th=[52691], 99.50th=[56361], 99.90th=[61080], 99.95th=[61080], 00:09:29.490 | 99.99th=[61080] 00:09:29.490 bw ( KiB/s): min=17232, max=19632, per=23.98%, avg=18432.00, stdev=1697.06, samples=2 00:09:29.490 iops : min= 4308, max= 4908, avg=4608.00, stdev=424.26, samples=2 00:09:29.490 lat (msec) : 2=0.01%, 4=0.74%, 10=26.88%, 20=59.61%, 50=11.65% 00:09:29.490 lat (msec) : 100=1.12% 00:09:29.490 cpu : usr=3.19%, sys=5.38%, ctx=533, majf=0, minf=2 00:09:29.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:29.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.490 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.490 job2: (groupid=0, jobs=1): err= 0: pid=2384821: Wed Nov 20 17:03:47 2024 00:09:29.490 read: IOPS=4404, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1003msec) 00:09:29.490 slat (nsec): min=1400, max=6539.0k, avg=97382.79, stdev=475460.95 00:09:29.490 clat (usec): min=833, max=28838, avg=12357.05, stdev=2563.99 00:09:29.490 lat (usec): min=3614, max=28848, avg=12454.43, stdev=2551.77 00:09:29.490 clat percentiles (usec): 00:09:29.490 | 1.00th=[ 6521], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11207], 00:09:29.490 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[12125], 00:09:29.490 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[14484], 00:09:29.490 | 99.00th=[25822], 99.50th=[26084], 99.90th=[28705], 99.95th=[28705], 00:09:29.490 | 99.99th=[28967] 00:09:29.490 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:29.490 slat (usec): min=2, max=10995, avg=118.17, stdev=578.94 00:09:29.490 clat (usec): min=8089, max=65021, avg=15414.36, 
stdev=10725.17 00:09:29.490 lat (usec): min=8432, max=65029, avg=15532.53, stdev=10791.52 00:09:29.490 clat percentiles (usec): 00:09:29.490 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11207], 00:09:29.490 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11469], 60.00th=[12256], 00:09:29.490 | 70.00th=[12911], 80.00th=[13435], 90.00th=[23987], 95.00th=[46400], 00:09:29.490 | 99.00th=[60556], 99.50th=[61604], 99.90th=[65274], 99.95th=[65274], 00:09:29.490 | 99.99th=[65274] 00:09:29.490 bw ( KiB/s): min=16384, max=20480, per=23.98%, avg=18432.00, stdev=2896.31, samples=2 00:09:29.490 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:29.490 lat (usec) : 1000=0.01% 00:09:29.490 lat (msec) : 4=0.24%, 10=4.97%, 20=87.20%, 50=6.35%, 100=1.22% 00:09:29.490 cpu : usr=3.69%, sys=5.19%, ctx=484, majf=0, minf=2 00:09:29.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:29.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.490 issued rwts: total=4418,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.490 job3: (groupid=0, jobs=1): err= 0: pid=2384825: Wed Nov 20 17:03:47 2024 00:09:29.490 read: IOPS=5465, BW=21.3MiB/s (22.4MB/s)(21.5MiB/1006msec) 00:09:29.491 slat (nsec): min=1561, max=12223k, avg=101297.01, stdev=747818.30 00:09:29.491 clat (usec): min=935, max=25060, avg=12432.08, stdev=3108.77 00:09:29.491 lat (usec): min=3976, max=26401, avg=12533.38, stdev=3164.84 00:09:29.491 clat percentiles (usec): 00:09:29.491 | 1.00th=[ 5407], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[10552], 00:09:29.491 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11338], 60.00th=[12256], 00:09:29.491 | 70.00th=[13042], 80.00th=[14091], 90.00th=[17433], 95.00th=[19006], 00:09:29.491 | 99.00th=[21890], 99.50th=[22938], 99.90th=[24249], 
99.95th=[24249], 00:09:29.491 | 99.99th=[25035] 00:09:29.491 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:09:29.491 slat (usec): min=2, max=8605, avg=73.25, stdev=384.46 00:09:29.491 clat (usec): min=2864, max=24131, avg=10478.80, stdev=2227.06 00:09:29.491 lat (usec): min=2877, max=24136, avg=10552.04, stdev=2260.32 00:09:29.491 clat percentiles (usec): 00:09:29.491 | 1.00th=[ 4047], 5.00th=[ 5866], 10.00th=[ 7242], 20.00th=[ 9110], 00:09:29.491 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11076], 60.00th=[11207], 00:09:29.491 | 70.00th=[11338], 80.00th=[11731], 90.00th=[13042], 95.00th=[13173], 00:09:29.491 | 99.00th=[14353], 99.50th=[15008], 99.90th=[22152], 99.95th=[23725], 00:09:29.491 | 99.99th=[24249] 00:09:29.491 bw ( KiB/s): min=20480, max=24576, per=29.30%, avg=22528.00, stdev=2896.31, samples=2 00:09:29.491 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:29.491 lat (usec) : 1000=0.01% 00:09:29.491 lat (msec) : 4=0.44%, 10=18.76%, 20=79.07%, 50=1.73% 00:09:29.491 cpu : usr=3.88%, sys=7.06%, ctx=613, majf=0, minf=1 00:09:29.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:29.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.491 issued rwts: total=5498,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.491 00:09:29.491 Run status group 0 (all jobs): 00:09:29.491 READ: bw=71.3MiB/s (74.8MB/s), 15.9MiB/s-21.3MiB/s (16.7MB/s-22.4MB/s), io=71.7MiB (75.2MB), run=1003-1006msec 00:09:29.491 WRITE: bw=75.1MiB/s (78.7MB/s), 17.4MiB/s-21.9MiB/s (18.3MB/s-22.9MB/s), io=75.5MiB (79.2MB), run=1003-1006msec 00:09:29.491 00:09:29.491 Disk stats (read/write): 00:09:29.491 nvme0n1: ios=3112/3583, merge=0/0, ticks=27582/26123, in_queue=53705, util=96.49% 00:09:29.491 nvme0n2: ios=3451/3584, 
merge=0/0, ticks=46550/55181, in_queue=101731, util=96.29% 00:09:29.491 nvme0n3: ios=3584/3777, merge=0/0, ticks=11064/14708, in_queue=25772, util=88.82% 00:09:29.491 nvme0n4: ios=4626/4855, merge=0/0, ticks=54315/48065, in_queue=102380, util=100.00% 00:09:29.491 17:03:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:29.491 17:03:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2385050 00:09:29.491 17:03:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:29.491 17:03:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:29.491 [global] 00:09:29.491 thread=1 00:09:29.491 invalidate=1 00:09:29.491 rw=read 00:09:29.491 time_based=1 00:09:29.491 runtime=10 00:09:29.491 ioengine=libaio 00:09:29.491 direct=1 00:09:29.491 bs=4096 00:09:29.491 iodepth=1 00:09:29.491 norandommap=1 00:09:29.491 numjobs=1 00:09:29.491 00:09:29.491 [job0] 00:09:29.491 filename=/dev/nvme0n1 00:09:29.491 [job1] 00:09:29.491 filename=/dev/nvme0n2 00:09:29.491 [job2] 00:09:29.491 filename=/dev/nvme0n3 00:09:29.491 [job3] 00:09:29.491 filename=/dev/nvme0n4 00:09:29.491 Could not set queue depth (nvme0n1) 00:09:29.491 Could not set queue depth (nvme0n2) 00:09:29.491 Could not set queue depth (nvme0n3) 00:09:29.491 Could not set queue depth (nvme0n4) 00:09:29.491 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.491 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.491 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.491 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.491 fio-3.35 00:09:29.491 Starting 4 threads 00:09:32.788 17:03:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:32.788 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=19279872, buflen=4096 00:09:32.788 fio: pid=2385291, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:32.788 17:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:32.788 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=42913792, buflen=4096 00:09:32.788 fio: pid=2385284, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:32.788 17:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.788 17:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:32.788 17:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.788 17:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:32.788 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=327680, buflen=4096 00:09:32.788 fio: pid=2385244, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.046 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.046 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:09:33.046 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=360448, buflen=4096 00:09:33.046 fio: pid=2385262, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.046 00:09:33.046 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2385244: Wed Nov 20 17:03:51 2024 00:09:33.046 read: IOPS=25, BW=100KiB/s (103kB/s)(320KiB/3185msec) 00:09:33.046 slat (usec): min=9, max=5831, avg=89.93, stdev=645.97 00:09:33.046 clat (usec): min=229, max=41185, avg=39447.23, stdev=7778.54 00:09:33.046 lat (usec): min=251, max=47016, avg=39538.01, stdev=7820.38 00:09:33.046 clat percentiles (usec): 00:09:33.046 | 1.00th=[ 231], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:33.046 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:33.046 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:33.046 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:33.046 | 99.99th=[41157] 00:09:33.046 bw ( KiB/s): min= 93, max= 120, per=0.55%, avg=100.83, stdev=10.09, samples=6 00:09:33.046 iops : min= 23, max= 30, avg=25.17, stdev= 2.56, samples=6 00:09:33.047 lat (usec) : 250=2.47%, 500=1.23% 00:09:33.047 lat (msec) : 50=95.06% 00:09:33.047 cpu : usr=0.09%, sys=0.00%, ctx=83, majf=0, minf=1 00:09:33.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.047 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2385262: Wed Nov 20 17:03:51 2024 00:09:33.047 read: IOPS=26, BW=104KiB/s (106kB/s)(352KiB/3393msec) 00:09:33.047 
slat (usec): min=10, max=15840, avg=200.41, stdev=1676.73 00:09:33.047 clat (usec): min=213, max=41291, avg=38182.01, stdev=10296.45 00:09:33.047 lat (usec): min=235, max=41313, avg=38384.45, stdev=9752.20 00:09:33.047 clat percentiles (usec): 00:09:33.047 | 1.00th=[ 215], 5.00th=[ 392], 10.00th=[40633], 20.00th=[41157], 00:09:33.047 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:33.047 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:33.047 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:33.047 | 99.99th=[41157] 00:09:33.047 bw ( KiB/s): min= 96, max= 112, per=0.57%, avg=104.67, stdev= 6.41, samples=6 00:09:33.047 iops : min= 24, max= 28, avg=26.17, stdev= 1.60, samples=6 00:09:33.047 lat (usec) : 250=2.25%, 500=4.49% 00:09:33.047 lat (msec) : 50=92.13% 00:09:33.047 cpu : usr=0.12%, sys=0.00%, ctx=91, majf=0, minf=2 00:09:33.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.047 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2385284: Wed Nov 20 17:03:51 2024 00:09:33.047 read: IOPS=3542, BW=13.8MiB/s (14.5MB/s)(40.9MiB/2958msec) 00:09:33.047 slat (nsec): min=6356, max=44940, avg=8019.31, stdev=1648.99 00:09:33.047 clat (usec): min=164, max=41213, avg=270.75, stdev=1590.46 00:09:33.047 lat (usec): min=173, max=41222, avg=278.77, stdev=1590.55 00:09:33.047 clat percentiles (usec): 00:09:33.047 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 192], 00:09:33.047 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:09:33.047 | 70.00th=[ 217], 80.00th=[ 223], 
90.00th=[ 237], 95.00th=[ 247], 00:09:33.047 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[40633], 99.95th=[41157], 00:09:33.047 | 99.99th=[41157] 00:09:33.047 bw ( KiB/s): min= 7920, max=19000, per=75.33%, avg=13633.60, stdev=5000.33, samples=5 00:09:33.047 iops : min= 1980, max= 4750, avg=3408.40, stdev=1250.08, samples=5 00:09:33.047 lat (usec) : 250=96.22%, 500=3.61% 00:09:33.047 lat (msec) : 2=0.01%, 50=0.15% 00:09:33.047 cpu : usr=1.01%, sys=4.84%, ctx=10478, majf=0, minf=2 00:09:33.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 issued rwts: total=10478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.047 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2385291: Wed Nov 20 17:03:51 2024 00:09:33.047 read: IOPS=1722, BW=6889KiB/s (7054kB/s)(18.4MiB/2733msec) 00:09:33.047 slat (nsec): min=6364, max=33325, avg=7757.25, stdev=2067.57 00:09:33.047 clat (usec): min=170, max=42046, avg=567.04, stdev=3751.70 00:09:33.047 lat (usec): min=177, max=42070, avg=574.80, stdev=3752.76 00:09:33.047 clat percentiles (usec): 00:09:33.047 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:09:33.047 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:09:33.047 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 262], 00:09:33.047 | 99.00th=[ 388], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:09:33.047 | 99.99th=[42206] 00:09:33.047 bw ( KiB/s): min= 96, max=11808, per=35.21%, avg=6372.80, stdev=4692.14, samples=5 00:09:33.047 iops : min= 24, max= 2952, avg=1593.20, stdev=1173.03, samples=5 00:09:33.047 lat (usec) : 250=87.11%, 500=11.98%, 750=0.02% 00:09:33.047 lat (msec) : 20=0.02%, 50=0.85% 
00:09:33.047 cpu : usr=0.44%, sys=1.65%, ctx=4710, majf=0, minf=1 00:09:33.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.047 issued rwts: total=4708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.047 00:09:33.047 Run status group 0 (all jobs): 00:09:33.047 READ: bw=17.7MiB/s (18.5MB/s), 100KiB/s-13.8MiB/s (103kB/s-14.5MB/s), io=60.0MiB (62.9MB), run=2733-3393msec 00:09:33.047 00:09:33.047 Disk stats (read/write): 00:09:33.047 nvme0n1: ios=78/0, merge=0/0, ticks=3076/0, in_queue=3076, util=95.47% 00:09:33.047 nvme0n2: ios=87/0, merge=0/0, ticks=3322/0, in_queue=3322, util=95.86% 00:09:33.047 nvme0n3: ios=10076/0, merge=0/0, ticks=2675/0, in_queue=2675, util=96.48% 00:09:33.047 nvme0n4: ios=4238/0, merge=0/0, ticks=3788/0, in_queue=3788, util=99.89% 00:09:33.305 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.305 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:33.564 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.564 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:33.823 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.823 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:33.823 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.823 17:03:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:34.081 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:34.081 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2385050 00:09:34.081 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:34.081 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:34.339 nvmf hotplug test: fio failed as expected 00:09:34.339 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.598 rmmod nvme_tcp 00:09:34.598 rmmod nvme_fabrics 00:09:34.598 rmmod nvme_keyring 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:34.598 17:03:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2382338 ']' 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2382338 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2382338 ']' 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2382338 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382338 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2382338' 00:09:34.598 killing process with pid 2382338 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2382338 00:09:34.598 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2382338 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.857 17:03:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.761 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.761 00:09:36.761 real 0m26.891s 00:09:36.761 user 1m47.199s 00:09:36.761 sys 0m8.479s 00:09:36.761 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.761 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.761 ************************************ 00:09:36.761 END TEST nvmf_fio_target 00:09:36.761 ************************************ 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.021 ************************************ 
00:09:37.021 START TEST nvmf_bdevio 00:09:37.021 ************************************ 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:37.021 * Looking for test storage... 00:09:37.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.021 17:03:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.021 17:03:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.021 --rc genhtml_branch_coverage=1 00:09:37.021 --rc genhtml_function_coverage=1 00:09:37.021 --rc genhtml_legend=1 00:09:37.021 --rc geninfo_all_blocks=1 00:09:37.021 --rc geninfo_unexecuted_blocks=1 00:09:37.021 00:09:37.021 ' 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.021 --rc genhtml_branch_coverage=1 00:09:37.021 --rc genhtml_function_coverage=1 00:09:37.021 --rc genhtml_legend=1 00:09:37.021 --rc geninfo_all_blocks=1 00:09:37.021 --rc geninfo_unexecuted_blocks=1 00:09:37.021 00:09:37.021 ' 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.021 --rc genhtml_branch_coverage=1 00:09:37.021 --rc genhtml_function_coverage=1 00:09:37.021 --rc genhtml_legend=1 00:09:37.021 --rc geninfo_all_blocks=1 00:09:37.021 --rc geninfo_unexecuted_blocks=1 00:09:37.021 00:09:37.021 ' 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.021 --rc genhtml_branch_coverage=1 00:09:37.021 --rc genhtml_function_coverage=1 00:09:37.021 --rc genhtml_legend=1 00:09:37.021 --rc geninfo_all_blocks=1 00:09:37.021 --rc geninfo_unexecuted_blocks=1 00:09:37.021 00:09:37.021 ' 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.021 17:03:55 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.021 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.022 17:03:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.598 17:04:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:43.598 17:04:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:43.598 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:43.598 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:43.598 
17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:43.598 Found net devices under 0000:86:00.0: cvl_0_0 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:43.598 Found net devices under 0000:86:00.1: cvl_0_1 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.598 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.599 17:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:43.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:09:43.599 00:09:43.599 --- 10.0.0.2 ping statistics --- 00:09:43.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.599 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:09:43.599 00:09:43.599 --- 10.0.0.1 ping statistics --- 00:09:43.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.599 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:43.599 17:04:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2389659 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2389659 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2389659 ']' 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 [2024-11-20 17:04:01.110971] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:09:43.599 [2024-11-20 17:04:01.111019] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.599 [2024-11-20 17:04:01.192172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.599 [2024-11-20 17:04:01.234065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.599 [2024-11-20 17:04:01.234101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.599 [2024-11-20 17:04:01.234108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.599 [2024-11-20 17:04:01.234114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.599 [2024-11-20 17:04:01.234119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:43.599 [2024-11-20 17:04:01.235757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.599 [2024-11-20 17:04:01.235868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:43.599 [2024-11-20 17:04:01.235975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.599 [2024-11-20 17:04:01.235976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 [2024-11-20 17:04:01.373067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.599 17:04:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 Malloc0 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.599 [2024-11-20 17:04:01.436265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.599 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.600 { 00:09:43.600 "params": { 00:09:43.600 "name": "Nvme$subsystem", 00:09:43.600 "trtype": "$TEST_TRANSPORT", 00:09:43.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.600 "adrfam": "ipv4", 00:09:43.600 "trsvcid": "$NVMF_PORT", 00:09:43.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.600 "hdgst": ${hdgst:-false}, 00:09:43.600 "ddgst": ${ddgst:-false} 00:09:43.600 }, 00:09:43.600 "method": "bdev_nvme_attach_controller" 00:09:43.600 } 00:09:43.600 EOF 00:09:43.600 )") 00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:43.600 17:04:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.600 "params": { 00:09:43.600 "name": "Nvme1", 00:09:43.600 "trtype": "tcp", 00:09:43.600 "traddr": "10.0.0.2", 00:09:43.600 "adrfam": "ipv4", 00:09:43.600 "trsvcid": "4420", 00:09:43.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.600 "hdgst": false, 00:09:43.600 "ddgst": false 00:09:43.600 }, 00:09:43.600 "method": "bdev_nvme_attach_controller" 00:09:43.600 }' 00:09:43.600 [2024-11-20 17:04:01.486925] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:09:43.600 [2024-11-20 17:04:01.486971] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389690 ] 00:09:43.600 [2024-11-20 17:04:01.564627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.600 [2024-11-20 17:04:01.608242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.600 [2024-11-20 17:04:01.608273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.600 [2024-11-20 17:04:01.608273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.858 I/O targets: 00:09:43.858 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:43.858 00:09:43.858 00:09:43.858 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.858 http://cunit.sourceforge.net/ 00:09:43.858 00:09:43.858 00:09:43.858 Suite: bdevio tests on: Nvme1n1 00:09:43.858 Test: blockdev write read block ...passed 00:09:43.858 Test: blockdev write zeroes read block ...passed 00:09:43.858 Test: blockdev write zeroes read no split ...passed 00:09:43.858 Test: blockdev write zeroes read split 
...passed 00:09:44.115 Test: blockdev write zeroes read split partial ...passed 00:09:44.115 Test: blockdev reset ...[2024-11-20 17:04:01.922717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:44.115 [2024-11-20 17:04:01.922780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2465340 (9): Bad file descriptor 00:09:44.116 [2024-11-20 17:04:02.024143] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:44.116 passed 00:09:44.116 Test: blockdev write read 8 blocks ...passed 00:09:44.116 Test: blockdev write read size > 128k ...passed 00:09:44.116 Test: blockdev write read invalid size ...passed 00:09:44.116 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.116 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.116 Test: blockdev write read max offset ...passed 00:09:44.116 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.116 Test: blockdev writev readv 8 blocks ...passed 00:09:44.116 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.374 Test: blockdev writev readv block ...passed 00:09:44.374 Test: blockdev writev readv size > 128k ...passed 00:09:44.374 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.374 Test: blockdev comparev and writev ...[2024-11-20 17:04:02.194035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.374 [2024-11-20 17:04:02.194063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.194076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.374 [2024-11-20 
17:04:02.194084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.194334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.374 [2024-11-20 17:04:02.194345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.194358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.374 [2024-11-20 17:04:02.194366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.194613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.374 [2024-11-20 17:04:02.194623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.194634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.374 [2024-11-20 17:04:02.194641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.194875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.374 [2024-11-20 17:04:02.194885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.194896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.374 [2024-11-20 17:04:02.194902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:44.374 passed 00:09:44.374 Test: blockdev nvme passthru rw ...passed 00:09:44.374 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:04:02.277621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.374 [2024-11-20 17:04:02.277636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.277741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.374 [2024-11-20 17:04:02.277751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.277859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.374 [2024-11-20 17:04:02.277868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:44.374 [2024-11-20 17:04:02.277964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.374 [2024-11-20 17:04:02.277973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:44.374 passed 00:09:44.374 Test: blockdev nvme admin passthru ...passed 00:09:44.374 Test: blockdev copy ...passed 00:09:44.374 00:09:44.374 Run Summary: Type Total Ran Passed Failed Inactive 00:09:44.374 suites 1 1 n/a 0 0 00:09:44.374 tests 23 23 23 0 0 00:09:44.374 asserts 152 152 152 0 n/a 00:09:44.374 00:09:44.374 Elapsed time = 1.148 seconds 
00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.632 rmmod nvme_tcp 00:09:44.632 rmmod nvme_fabrics 00:09:44.632 rmmod nvme_keyring 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2389659 ']' 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2389659 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2389659 ']' 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2389659 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2389659 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2389659' 00:09:44.632 killing process with pid 2389659 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2389659 00:09:44.632 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2389659 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.891 17:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.429 17:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.429 00:09:47.429 real 0m10.010s 00:09:47.429 user 0m9.818s 00:09:47.429 sys 0m5.030s 00:09:47.429 17:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.429 17:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.429 ************************************ 00:09:47.429 END TEST nvmf_bdevio 00:09:47.429 ************************************ 00:09:47.429 17:04:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:47.429 00:09:47.429 real 4m37.070s 00:09:47.429 user 10m32.786s 00:09:47.429 sys 1m39.454s 00:09:47.429 17:04:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.429 17:04:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.429 ************************************ 00:09:47.429 END TEST nvmf_target_core 00:09:47.429 ************************************ 00:09:47.429 17:04:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:47.429 17:04:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.429 17:04:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.429 17:04:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:47.429 ************************************ 00:09:47.429 START TEST nvmf_target_extra 00:09:47.430 ************************************ 00:09:47.430 17:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:47.430 * Looking for test storage... 00:09:47.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.430 --rc genhtml_branch_coverage=1 00:09:47.430 --rc genhtml_function_coverage=1 00:09:47.430 --rc genhtml_legend=1 00:09:47.430 --rc geninfo_all_blocks=1 
00:09:47.430 --rc geninfo_unexecuted_blocks=1 00:09:47.430 00:09:47.430 ' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.430 --rc genhtml_branch_coverage=1 00:09:47.430 --rc genhtml_function_coverage=1 00:09:47.430 --rc genhtml_legend=1 00:09:47.430 --rc geninfo_all_blocks=1 00:09:47.430 --rc geninfo_unexecuted_blocks=1 00:09:47.430 00:09:47.430 ' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.430 --rc genhtml_branch_coverage=1 00:09:47.430 --rc genhtml_function_coverage=1 00:09:47.430 --rc genhtml_legend=1 00:09:47.430 --rc geninfo_all_blocks=1 00:09:47.430 --rc geninfo_unexecuted_blocks=1 00:09:47.430 00:09:47.430 ' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.430 --rc genhtml_branch_coverage=1 00:09:47.430 --rc genhtml_function_coverage=1 00:09:47.430 --rc genhtml_legend=1 00:09:47.430 --rc geninfo_all_blocks=1 00:09:47.430 --rc geninfo_unexecuted_blocks=1 00:09:47.430 00:09:47.430 ' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:47.430 ************************************ 00:09:47.430 START TEST nvmf_example 00:09:47.430 ************************************ 00:09:47.430 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:47.431 * Looking for test storage... 00:09:47.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.431 
17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.431 --rc genhtml_branch_coverage=1 00:09:47.431 --rc genhtml_function_coverage=1 00:09:47.431 --rc genhtml_legend=1 00:09:47.431 --rc geninfo_all_blocks=1 00:09:47.431 --rc geninfo_unexecuted_blocks=1 00:09:47.431 00:09:47.431 ' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.431 --rc genhtml_branch_coverage=1 00:09:47.431 --rc genhtml_function_coverage=1 00:09:47.431 --rc genhtml_legend=1 00:09:47.431 --rc geninfo_all_blocks=1 00:09:47.431 --rc geninfo_unexecuted_blocks=1 00:09:47.431 00:09:47.431 ' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.431 --rc genhtml_branch_coverage=1 00:09:47.431 --rc genhtml_function_coverage=1 00:09:47.431 --rc genhtml_legend=1 00:09:47.431 --rc geninfo_all_blocks=1 00:09:47.431 --rc geninfo_unexecuted_blocks=1 00:09:47.431 00:09:47.431 ' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.431 --rc 
genhtml_branch_coverage=1 00:09:47.431 --rc genhtml_function_coverage=1 00:09:47.431 --rc genhtml_legend=1 00:09:47.431 --rc geninfo_all_blocks=1 00:09:47.431 --rc geninfo_unexecuted_blocks=1 00:09:47.431 00:09:47.431 ' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:47.431 17:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:47.431 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.432 
17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.432 17:04:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.003 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.004 17:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:54.004 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:54.004 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:54.004 Found net devices under 0000:86:00.0: cvl_0_0 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.004 17:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:54.004 Found net devices under 0000:86:00.1: cvl_0_1 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.004 
17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:09:54.004 00:09:54.004 --- 10.0.0.2 ping statistics --- 00:09:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.004 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:09:54.004 00:09:54.004 --- 10.0.0.1 ping statistics --- 00:09:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.004 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.004 17:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.004 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2393512 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2393512 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2393512 ']' 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:54.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.005 17:04:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:54.573 
17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:54.573 17:04:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:06.784 Initializing NVMe Controllers 00:10:06.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:06.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:06.784 Initialization complete. Launching workers. 00:10:06.784 ======================================================== 00:10:06.784 Latency(us) 00:10:06.784 Device Information : IOPS MiB/s Average min max 00:10:06.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18428.48 71.99 3472.33 681.26 16416.34 00:10:06.784 ======================================================== 00:10:06.784 Total : 18428.48 71.99 3472.33 681.26 16416.34 00:10:06.784 00:10:06.784 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:06.784 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:06.784 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.785 rmmod nvme_tcp 00:10:06.785 rmmod nvme_fabrics 00:10:06.785 rmmod nvme_keyring 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2393512 ']' 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2393512 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2393512 ']' 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2393512 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2393512 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2393512' 00:10:06.785 killing process with pid 2393512 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2393512 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2393512 00:10:06.785 nvmf threads initialize successfully 00:10:06.785 bdev subsystem init successfully 00:10:06.785 created a nvmf target service 00:10:06.785 create targets's poll groups done 00:10:06.785 all subsystems of target started 00:10:06.785 nvmf target is running 00:10:06.785 all subsystems of target stopped 00:10:06.785 destroy targets's poll groups done 00:10:06.785 destroyed the nvmf target service 00:10:06.785 bdev subsystem 
finish successfully 00:10:06.785 nvmf threads destroy successfully 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.785 17:04:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.044 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.044 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:07.044 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.044 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 00:10:07.044 real 0m19.856s 00:10:07.044 user 0m46.018s 00:10:07.044 sys 0m6.117s 00:10:07.044 
17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.044 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 ************************************ 00:10:07.044 END TEST nvmf_example 00:10:07.044 ************************************ 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:07.305 ************************************ 00:10:07.305 START TEST nvmf_filesystem 00:10:07.305 ************************************ 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:07.305 * Looking for test storage... 
00:10:07.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:07.305 
17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.305 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.305 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:07.306 --rc genhtml_branch_coverage=1 00:10:07.306 --rc genhtml_function_coverage=1 00:10:07.306 --rc genhtml_legend=1 00:10:07.306 --rc geninfo_all_blocks=1 00:10:07.306 --rc geninfo_unexecuted_blocks=1 00:10:07.306 00:10:07.306 ' 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.306 --rc genhtml_branch_coverage=1 00:10:07.306 --rc genhtml_function_coverage=1 00:10:07.306 --rc genhtml_legend=1 00:10:07.306 --rc geninfo_all_blocks=1 00:10:07.306 --rc geninfo_unexecuted_blocks=1 00:10:07.306 00:10:07.306 ' 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.306 --rc genhtml_branch_coverage=1 00:10:07.306 --rc genhtml_function_coverage=1 00:10:07.306 --rc genhtml_legend=1 00:10:07.306 --rc geninfo_all_blocks=1 00:10:07.306 --rc geninfo_unexecuted_blocks=1 00:10:07.306 00:10:07.306 ' 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.306 --rc genhtml_branch_coverage=1 00:10:07.306 --rc genhtml_function_coverage=1 00:10:07.306 --rc genhtml_legend=1 00:10:07.306 --rc geninfo_all_blocks=1 00:10:07.306 --rc geninfo_unexecuted_blocks=1 00:10:07.306 00:10:07.306 ' 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:07.306 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:07.306 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:07.306 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:07.306 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:07.306 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:07.307 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:07.307 
17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:07.307 #define SPDK_CONFIG_H 00:10:07.307 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:07.307 #define SPDK_CONFIG_APPS 1 00:10:07.307 #define SPDK_CONFIG_ARCH native 00:10:07.307 #undef SPDK_CONFIG_ASAN 00:10:07.307 #undef SPDK_CONFIG_AVAHI 00:10:07.307 #undef SPDK_CONFIG_CET 00:10:07.307 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:07.307 #define SPDK_CONFIG_COVERAGE 1 00:10:07.307 #define SPDK_CONFIG_CROSS_PREFIX 00:10:07.307 #undef SPDK_CONFIG_CRYPTO 00:10:07.307 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:07.307 #undef SPDK_CONFIG_CUSTOMOCF 00:10:07.307 #undef SPDK_CONFIG_DAOS 00:10:07.307 #define SPDK_CONFIG_DAOS_DIR 00:10:07.307 #define SPDK_CONFIG_DEBUG 1 00:10:07.307 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:07.307 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:07.307 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:07.307 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:07.307 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:07.307 #undef SPDK_CONFIG_DPDK_UADK 00:10:07.307 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:07.307 #define SPDK_CONFIG_EXAMPLES 1 00:10:07.307 #undef SPDK_CONFIG_FC 00:10:07.307 #define SPDK_CONFIG_FC_PATH 00:10:07.307 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:07.307 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:07.307 #define SPDK_CONFIG_FSDEV 1 00:10:07.307 #undef SPDK_CONFIG_FUSE 00:10:07.307 #undef SPDK_CONFIG_FUZZER 00:10:07.307 #define SPDK_CONFIG_FUZZER_LIB 00:10:07.307 #undef SPDK_CONFIG_GOLANG 00:10:07.307 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:07.307 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:07.307 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:07.307 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:07.307 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:07.307 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:07.307 #undef SPDK_CONFIG_HAVE_LZ4 00:10:07.307 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:07.307 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:07.307 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:07.307 #define SPDK_CONFIG_IDXD 1 00:10:07.307 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:07.307 #undef SPDK_CONFIG_IPSEC_MB 00:10:07.307 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:07.307 #define SPDK_CONFIG_ISAL 1 00:10:07.307 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:07.307 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:07.307 #define SPDK_CONFIG_LIBDIR 00:10:07.307 #undef SPDK_CONFIG_LTO 00:10:07.307 #define SPDK_CONFIG_MAX_LCORES 128 00:10:07.307 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:07.307 #define SPDK_CONFIG_NVME_CUSE 1 00:10:07.307 #undef SPDK_CONFIG_OCF 00:10:07.307 #define SPDK_CONFIG_OCF_PATH 00:10:07.307 #define SPDK_CONFIG_OPENSSL_PATH 00:10:07.307 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:07.307 #define SPDK_CONFIG_PGO_DIR 00:10:07.307 #undef SPDK_CONFIG_PGO_USE 00:10:07.307 #define SPDK_CONFIG_PREFIX /usr/local 00:10:07.307 #undef SPDK_CONFIG_RAID5F 00:10:07.307 #undef SPDK_CONFIG_RBD 00:10:07.307 #define SPDK_CONFIG_RDMA 1 00:10:07.307 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:07.307 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:07.307 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:07.307 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:07.307 #define SPDK_CONFIG_SHARED 1 00:10:07.307 #undef SPDK_CONFIG_SMA 00:10:07.307 #define SPDK_CONFIG_TESTS 1 00:10:07.307 #undef SPDK_CONFIG_TSAN 00:10:07.307 #define SPDK_CONFIG_UBLK 1 00:10:07.307 #define SPDK_CONFIG_UBSAN 1 00:10:07.307 #undef SPDK_CONFIG_UNIT_TESTS 00:10:07.307 #undef SPDK_CONFIG_URING 00:10:07.307 #define SPDK_CONFIG_URING_PATH 00:10:07.307 #undef SPDK_CONFIG_URING_ZNS 00:10:07.307 #undef SPDK_CONFIG_USDT 00:10:07.307 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:07.307 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:07.307 #define SPDK_CONFIG_VFIO_USER 1 00:10:07.307 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:07.307 #define SPDK_CONFIG_VHOST 1 00:10:07.307 #define SPDK_CONFIG_VIRTIO 1 00:10:07.307 #undef SPDK_CONFIG_VTUNE 00:10:07.307 #define SPDK_CONFIG_VTUNE_DIR 00:10:07.307 #define SPDK_CONFIG_WERROR 1 00:10:07.307 #define SPDK_CONFIG_WPDK_DIR 00:10:07.307 #undef SPDK_CONFIG_XNVME 00:10:07.307 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
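The `applications.sh@23` check traced above dumps the whole generated `include/spdk/config.h` into a bash `[[ string == *pattern* ]]` glob match to decide whether the build has `SPDK_CONFIG_DEBUG` defined (each pattern character is backslash-escaped so it matches literally). A sketch of the same probe against a stand-in temp header, wrapped in a hypothetical `has_define` helper:

```shell
# has_define <header> <symbol>: true if the header literally contains
# "#define <symbol>". $(< file) reads the whole file without a subshell cat;
# quoting the pattern inside [[ ]] forces a literal substring match.
has_define() {
    [[ $(< "$1") == *"#define $2"* ]]
}

# Demo against a stand-in for include/spdk/config.h.
config_h=$(mktemp)
cat > "$config_h" <<'EOF'
#ifndef SPDK_CONFIG_H
#define SPDK_CONFIG_H
#define SPDK_CONFIG_DEBUG 1
#undef SPDK_CONFIG_ASAN
#endif /* SPDK_CONFIG_H */
EOF

if has_define "$config_h" SPDK_CONFIG_DEBUG; then
    echo "debug build"
fi
rm -f "$config_h"
```

An `#undef SPDK_CONFIG_ASAN` line does not match, since the probe looks only for the `#define` spelling — the same reason the traced check is sensitive to whether configure emitted `#define` or `#undef` for each option.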
00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:07.307 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:07.574 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:07.574 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:07.575 
17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:07.575 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:07.575 
17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:07.575 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:07.575 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2395917 ]] 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2395917 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.6XESes 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.6XESes/tests/target /tmp/spdk.6XESes 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:07.576 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189120348160 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6843625472 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97970618368 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981202432 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:10:07.577 17:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=786432 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:07.577 * Looking for test storage... 
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189120348160
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]]
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]]
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9058217984
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:07.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:07.577 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:07.578 --rc genhtml_branch_coverage=1
00:10:07.578 --rc genhtml_function_coverage=1
00:10:07.578 --rc genhtml_legend=1
00:10:07.578 --rc geninfo_all_blocks=1
00:10:07.578 --rc geninfo_unexecuted_blocks=1
00:10:07.578 
00:10:07.578 '
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:07.578 --rc genhtml_branch_coverage=1
00:10:07.578 --rc genhtml_function_coverage=1
00:10:07.578 --rc genhtml_legend=1
00:10:07.578 --rc geninfo_all_blocks=1
00:10:07.578 --rc geninfo_unexecuted_blocks=1
00:10:07.578 
00:10:07.578 '
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:07.578 --rc genhtml_branch_coverage=1
00:10:07.578 --rc genhtml_function_coverage=1
00:10:07.578 --rc genhtml_legend=1
00:10:07.578 --rc geninfo_all_blocks=1
00:10:07.578 --rc geninfo_unexecuted_blocks=1
00:10:07.578 
00:10:07.578 '
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:07.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:07.578 --rc genhtml_branch_coverage=1
00:10:07.578 --rc genhtml_function_coverage=1
00:10:07.578 --rc genhtml_legend=1
00:10:07.578 --rc geninfo_all_blocks=1
00:10:07.578 --rc geninfo_unexecuted_blocks=1
00:10:07.578 
00:10:07.578 '
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:07.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable
00:10:07.578 17:04:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=()
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=()
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=()
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=()
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=()
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:14.364 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:14.364 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:14.364 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:14.365 Found net devices under 0000:86:00.0: cvl_0_0
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:14.365 Found net devices under 0000:86:00.1: cvl_0_1
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:14.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:14.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms
00:10:14.365 
00:10:14.365 --- 10.0.0.2 ping statistics ---
00:10:14.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:14.365 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:14.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:14.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms
00:10:14.365 
00:10:14.365 --- 10.0.0.1 ping statistics ---
00:10:14.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:14.365 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:14.365 ************************************
00:10:14.365 START TEST nvmf_filesystem_no_in_capsule
00:10:14.365 ************************************
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:14.365 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2399176
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2399176
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2399176 ']'
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:14.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.366 [2024-11-20 17:04:31.754027] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:10:14.366 [2024-11-20 17:04:31.754073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:14.366 [2024-11-20 17:04:31.836514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:14.366 [2024-11-20 17:04:31.879160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:14.366 [2024-11-20 17:04:31.879195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:14.366 [2024-11-20 17:04:31.879206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:14.366 [2024-11-20 17:04:31.879212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:14.366 [2024-11-20 17:04:31.879217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:14.366 [2024-11-20 17:04:31.880702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:14.366 [2024-11-20 17:04:31.880814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:14.366 [2024-11-20 17:04:31.880920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.366 [2024-11-20 17:04:31.880921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:14.366 17:04:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.366 [2024-11-20 17:04:32.017878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.366 Malloc1
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule --
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.366 [2024-11-20 17:04:32.171922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:14.366 17:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.366 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:14.366 { 00:10:14.366 "name": "Malloc1", 00:10:14.366 "aliases": [ 00:10:14.366 "b7e1720b-bf94-4985-ac36-447213129fa1" 00:10:14.366 ], 00:10:14.366 "product_name": "Malloc disk", 00:10:14.366 "block_size": 512, 00:10:14.366 "num_blocks": 1048576, 00:10:14.366 "uuid": "b7e1720b-bf94-4985-ac36-447213129fa1", 00:10:14.366 "assigned_rate_limits": { 00:10:14.366 "rw_ios_per_sec": 0, 00:10:14.366 "rw_mbytes_per_sec": 0, 00:10:14.366 "r_mbytes_per_sec": 0, 00:10:14.367 "w_mbytes_per_sec": 0 00:10:14.367 }, 00:10:14.367 "claimed": true, 00:10:14.367 "claim_type": "exclusive_write", 00:10:14.367 "zoned": false, 00:10:14.367 "supported_io_types": { 00:10:14.367 "read": true, 00:10:14.367 "write": true, 00:10:14.367 "unmap": true, 00:10:14.367 "flush": true, 00:10:14.367 "reset": true, 00:10:14.367 "nvme_admin": false, 00:10:14.367 "nvme_io": false, 00:10:14.367 "nvme_io_md": false, 00:10:14.367 "write_zeroes": true, 00:10:14.367 "zcopy": true, 00:10:14.367 "get_zone_info": false, 00:10:14.367 "zone_management": false, 00:10:14.367 "zone_append": false, 00:10:14.367 "compare": false, 00:10:14.367 "compare_and_write": 
false, 00:10:14.367 "abort": true, 00:10:14.367 "seek_hole": false, 00:10:14.367 "seek_data": false, 00:10:14.367 "copy": true, 00:10:14.367 "nvme_iov_md": false 00:10:14.367 }, 00:10:14.367 "memory_domains": [ 00:10:14.367 { 00:10:14.367 "dma_device_id": "system", 00:10:14.367 "dma_device_type": 1 00:10:14.367 }, 00:10:14.367 { 00:10:14.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.367 "dma_device_type": 2 00:10:14.367 } 00:10:14.367 ], 00:10:14.367 "driver_specific": {} 00:10:14.367 } 00:10:14.367 ]' 00:10:14.367 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:14.367 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:14.367 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:14.367 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:14.367 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:14.367 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:14.367 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:14.367 17:04:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.743 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:15.743 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:15.743 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.743 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:15.743 17:04:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:17.648 17:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:17.648 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:17.907 17:04:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:18.474 17:04:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:19.411 17:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.411 ************************************ 00:10:19.411 START TEST filesystem_ext4 00:10:19.411 ************************************ 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:19.411 17:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:19.411 17:04:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:19.411 mke2fs 1.47.0 (5-Feb-2023) 00:10:19.670 Discarding device blocks: 0/522240 done 00:10:19.670 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:19.670 Filesystem UUID: deab3582-e963-4055-80e5-024e2fb4c543 00:10:19.670 Superblock backups stored on blocks: 00:10:19.670 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:19.670 00:10:19.670 Allocating group tables: 0/64 done 00:10:19.670 Writing inode tables: 0/64 done 00:10:20.237 Creating journal (8192 blocks): done 00:10:20.237 Writing superblocks and filesystem accounting information: 0/64 done 00:10:20.237 00:10:20.237 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:20.237 17:04:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:25.506 17:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2399176 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:25.506 00:10:25.506 real 0m6.001s 00:10:25.506 user 0m0.024s 00:10:25.506 sys 0m0.075s 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:25.506 ************************************ 00:10:25.506 END TEST filesystem_ext4 00:10:25.506 ************************************ 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:25.506 
17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.506 ************************************ 00:10:25.506 START TEST filesystem_btrfs 00:10:25.506 ************************************ 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:25.506 17:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:25.506 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:25.765 btrfs-progs v6.8.1 00:10:25.765 See https://btrfs.readthedocs.io for more information. 00:10:25.765 00:10:25.765 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:25.765 NOTE: several default settings have changed in version 5.15, please make sure 00:10:25.765 this does not affect your deployments: 00:10:25.765 - DUP for metadata (-m dup) 00:10:25.765 - enabled no-holes (-O no-holes) 00:10:25.765 - enabled free-space-tree (-R free-space-tree) 00:10:25.765 00:10:25.765 Label: (null) 00:10:25.765 UUID: fb9789f4-244d-4655-952c-eeb70d657cc2 00:10:25.765 Node size: 16384 00:10:25.765 Sector size: 4096 (CPU page size: 4096) 00:10:25.765 Filesystem size: 510.00MiB 00:10:25.765 Block group profiles: 00:10:25.765 Data: single 8.00MiB 00:10:25.765 Metadata: DUP 32.00MiB 00:10:25.765 System: DUP 8.00MiB 00:10:25.765 SSD detected: yes 00:10:25.765 Zoned device: no 00:10:25.765 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:25.765 Checksum: crc32c 00:10:25.765 Number of devices: 1 00:10:25.765 Devices: 00:10:25.765 ID SIZE PATH 00:10:25.765 1 510.00MiB /dev/nvme0n1p1 00:10:25.765 00:10:25.765 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:25.765 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.025 17:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2399176 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.025 00:10:26.025 real 0m0.456s 00:10:26.025 user 0m0.036s 00:10:26.025 sys 0m0.105s 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.025 
17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:26.025 ************************************ 00:10:26.025 END TEST filesystem_btrfs 00:10:26.025 ************************************ 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.025 17:04:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.025 ************************************ 00:10:26.025 START TEST filesystem_xfs 00:10:26.025 ************************************ 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:26.025 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:26.283 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:26.283 = sectsz=512 attr=2, projid32bit=1 00:10:26.283 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:26.283 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:26.283 data = bsize=4096 blocks=130560, imaxpct=25 00:10:26.283 = sunit=0 swidth=0 blks 00:10:26.283 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:26.283 log =internal log bsize=4096 blocks=16384, version=2 00:10:26.283 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:26.283 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:26.850 Discarding blocks...Done. 
00:10:26.850 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:26.850 17:04:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2399176 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:29.385 17:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:29.385 00:10:29.385 real 0m3.133s 00:10:29.385 user 0m0.021s 00:10:29.385 sys 0m0.078s 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:29.385 ************************************ 00:10:29.385 END TEST filesystem_xfs 00:10:29.385 ************************************ 00:10:29.385 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2399176 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2399176 ']' 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2399176 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2399176 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2399176' 00:10:29.643 killing process with pid 2399176 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2399176 00:10:29.643 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2399176 00:10:30.211 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:30.211 00:10:30.211 real 0m16.285s 00:10:30.211 user 1m4.051s 00:10:30.211 sys 0m1.365s 00:10:30.211 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.211 17:04:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.211 ************************************ 00:10:30.211 END TEST nvmf_filesystem_no_in_capsule 00:10:30.211 ************************************ 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.211 17:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.211 ************************************ 00:10:30.211 START TEST nvmf_filesystem_in_capsule 00:10:30.211 ************************************ 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2401994 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2401994 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2401994 ']' 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.211 17:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.211 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.211 [2024-11-20 17:04:48.117644] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:10:30.211 [2024-11-20 17:04:48.117687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.211 [2024-11-20 17:04:48.178518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.211 [2024-11-20 17:04:48.221028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.211 [2024-11-20 17:04:48.221067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.211 [2024-11-20 17:04:48.221074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.211 [2024-11-20 17:04:48.221080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.211 [2024-11-20 17:04:48.221085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:30.211 [2024-11-20 17:04:48.222747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.211 [2024-11-20 17:04:48.222864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.211 [2024-11-20 17:04:48.222971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.211 [2024-11-20 17:04:48.222972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.469 [2024-11-20 17:04:48.361343] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.469 Malloc1 00:10:30.469 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.470 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.470 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.470 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.470 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:30.470 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.470 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.727 17:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.727 [2024-11-20 17:04:48.525351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.727 17:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:30.727 { 00:10:30.727 "name": "Malloc1", 00:10:30.727 "aliases": [ 00:10:30.727 "8cc2adf6-8103-4338-8e22-57d3353a92bc" 00:10:30.727 ], 00:10:30.727 "product_name": "Malloc disk", 00:10:30.727 "block_size": 512, 00:10:30.727 "num_blocks": 1048576, 00:10:30.727 "uuid": "8cc2adf6-8103-4338-8e22-57d3353a92bc", 00:10:30.727 "assigned_rate_limits": { 00:10:30.727 "rw_ios_per_sec": 0, 00:10:30.727 "rw_mbytes_per_sec": 0, 00:10:30.727 "r_mbytes_per_sec": 0, 00:10:30.727 "w_mbytes_per_sec": 0 00:10:30.727 }, 00:10:30.727 "claimed": true, 00:10:30.727 "claim_type": "exclusive_write", 00:10:30.727 "zoned": false, 00:10:30.727 "supported_io_types": { 00:10:30.727 "read": true, 00:10:30.727 "write": true, 00:10:30.727 "unmap": true, 00:10:30.727 "flush": true, 00:10:30.727 "reset": true, 00:10:30.727 "nvme_admin": false, 00:10:30.727 "nvme_io": false, 00:10:30.727 "nvme_io_md": false, 00:10:30.727 "write_zeroes": true, 00:10:30.727 "zcopy": true, 00:10:30.727 "get_zone_info": false, 00:10:30.727 "zone_management": false, 00:10:30.727 "zone_append": false, 00:10:30.727 "compare": false, 00:10:30.727 "compare_and_write": false, 00:10:30.727 "abort": true, 00:10:30.727 "seek_hole": false, 00:10:30.727 "seek_data": false, 00:10:30.727 "copy": true, 00:10:30.727 "nvme_iov_md": false 00:10:30.727 }, 00:10:30.727 "memory_domains": [ 00:10:30.727 { 00:10:30.727 "dma_device_id": "system", 00:10:30.727 "dma_device_type": 1 00:10:30.727 }, 00:10:30.727 { 00:10:30.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.727 "dma_device_type": 2 00:10:30.727 } 00:10:30.727 ], 00:10:30.727 
"driver_specific": {} 00:10:30.727 } 00:10:30.727 ]' 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:30.727 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:30.728 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:30.728 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:30.728 17:04:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.103 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:32.103 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:32.104 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.104 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:32.104 17:04:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:34.004 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:34.004 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:34.005 17:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:34.005 17:04:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:34.005 17:04:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.382 ************************************ 00:10:35.382 START TEST filesystem_in_capsule_ext4 00:10:35.382 ************************************ 00:10:35.382 17:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:35.382 17:04:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:35.382 mke2fs 1.47.0 (5-Feb-2023) 00:10:35.382 Discarding device blocks: 
0/522240 done 00:10:35.382 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:35.382 Filesystem UUID: 226a7d70-a50d-4d74-9c5b-db42312a15fa 00:10:35.382 Superblock backups stored on blocks: 00:10:35.383 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:35.383 00:10:35.383 Allocating group tables: 0/64 done 00:10:35.383 Writing inode tables: 0/64 done 00:10:38.672 Creating journal (8192 blocks): done 00:10:38.672 Writing superblocks and filesystem accounting information: 0/64 done 00:10:38.672 00:10:38.672 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:38.672 17:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2401994 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.942 00:10:43.942 real 0m8.579s 00:10:43.942 user 0m0.022s 00:10:43.942 sys 0m0.077s 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:43.942 ************************************ 00:10:43.942 END TEST filesystem_in_capsule_ext4 00:10:43.942 ************************************ 00:10:43.942 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.943 ************************************ 00:10:43.943 START 
TEST filesystem_in_capsule_btrfs 00:10:43.943 ************************************ 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:43.943 btrfs-progs v6.8.1 00:10:43.943 See https://btrfs.readthedocs.io for more information. 00:10:43.943 00:10:43.943 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:43.943 NOTE: several default settings have changed in version 5.15, please make sure 00:10:43.943 this does not affect your deployments: 00:10:43.943 - DUP for metadata (-m dup) 00:10:43.943 - enabled no-holes (-O no-holes) 00:10:43.943 - enabled free-space-tree (-R free-space-tree) 00:10:43.943 00:10:43.943 Label: (null) 00:10:43.943 UUID: 7861f518-08ed-420c-940e-680224ae4c75 00:10:43.943 Node size: 16384 00:10:43.943 Sector size: 4096 (CPU page size: 4096) 00:10:43.943 Filesystem size: 510.00MiB 00:10:43.943 Block group profiles: 00:10:43.943 Data: single 8.00MiB 00:10:43.943 Metadata: DUP 32.00MiB 00:10:43.943 System: DUP 8.00MiB 00:10:43.943 SSD detected: yes 00:10:43.943 Zoned device: no 00:10:43.943 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:43.943 Checksum: crc32c 00:10:43.943 Number of devices: 1 00:10:43.943 Devices: 00:10:43.943 ID SIZE PATH 00:10:43.943 1 510.00MiB /dev/nvme0n1p1 00:10:43.943 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:43.943 17:05:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.201 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:44.201 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:44.201 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:44.201 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2401994 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:44.202 00:10:44.202 real 0m0.414s 00:10:44.202 user 0m0.027s 00:10:44.202 sys 0m0.119s 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:44.202 ************************************ 00:10:44.202 END TEST filesystem_in_capsule_btrfs 00:10:44.202 ************************************ 00:10:44.202 17:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.202 ************************************ 00:10:44.202 START TEST filesystem_in_capsule_xfs 00:10:44.202 ************************************ 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:44.202 
17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:44.202 17:05:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:44.460 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:44.460 = sectsz=512 attr=2, projid32bit=1 00:10:44.460 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:44.460 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:44.460 data = bsize=4096 blocks=130560, imaxpct=25 00:10:44.460 = sunit=0 swidth=0 blks 00:10:44.460 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:44.460 log =internal log bsize=4096 blocks=16384, version=2 00:10:44.460 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:44.460 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:45.395 Discarding blocks...Done. 
00:10:45.395 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:45.395 17:05:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2401994 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:47.928 00:10:47.928 real 0m3.205s 00:10:47.928 user 0m0.025s 00:10:47.928 sys 0m0.074s 00:10:47.928 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:47.929 ************************************ 00:10:47.929 END TEST filesystem_in_capsule_xfs 00:10:47.929 ************************************ 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.929 17:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2401994 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2401994 ']' 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2401994 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.929 17:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401994 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2401994' 00:10:47.929 killing process with pid 2401994 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2401994 00:10:47.929 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2401994 00:10:48.188 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:48.188 00:10:48.188 real 0m17.931s 00:10:48.188 user 1m10.607s 00:10:48.188 sys 0m1.427s 00:10:48.188 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.188 17:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.188 ************************************ 00:10:48.188 END TEST nvmf_filesystem_in_capsule 00:10:48.188 ************************************ 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.188 rmmod nvme_tcp 00:10:48.188 rmmod nvme_fabrics 00:10:48.188 rmmod nvme_keyring 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.188 17:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.733 00:10:50.733 real 0m43.054s 00:10:50.733 user 2m16.661s 00:10:50.733 sys 0m7.632s 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.733 ************************************ 00:10:50.733 END TEST nvmf_filesystem 00:10:50.733 ************************************ 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:50.733 ************************************ 00:10:50.733 START TEST nvmf_target_discovery 00:10:50.733 ************************************ 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:50.733 * Looking for test storage... 
00:10:50.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:50.733 
17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.733 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.734 --rc genhtml_branch_coverage=1 00:10:50.734 --rc genhtml_function_coverage=1 00:10:50.734 --rc genhtml_legend=1 00:10:50.734 --rc geninfo_all_blocks=1 00:10:50.734 --rc geninfo_unexecuted_blocks=1 00:10:50.734 00:10:50.734 ' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.734 --rc genhtml_branch_coverage=1 00:10:50.734 --rc genhtml_function_coverage=1 00:10:50.734 --rc genhtml_legend=1 00:10:50.734 --rc geninfo_all_blocks=1 00:10:50.734 --rc geninfo_unexecuted_blocks=1 00:10:50.734 00:10:50.734 ' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:50.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.734 --rc genhtml_branch_coverage=1 00:10:50.734 --rc genhtml_function_coverage=1 00:10:50.734 --rc genhtml_legend=1 00:10:50.734 --rc geninfo_all_blocks=1 00:10:50.734 --rc geninfo_unexecuted_blocks=1 00:10:50.734 00:10:50.734 ' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.734 --rc genhtml_branch_coverage=1 00:10:50.734 --rc genhtml_function_coverage=1 00:10:50.734 --rc genhtml_legend=1 00:10:50.734 --rc geninfo_all_blocks=1 00:10:50.734 --rc geninfo_unexecuted_blocks=1 00:10:50.734 00:10:50.734 ' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.734 17:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.734 17:05:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.309 17:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.309 17:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.309 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:57.310 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:57.310 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.310 17:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:57.310 Found net devices under 0000:86:00.0: cvl_0_0 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.310 17:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:57.310 Found net devices under 0000:86:00.1: cvl_0_1 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:10:57.310 00:10:57.310 --- 10.0.0.2 ping statistics --- 00:10:57.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.310 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:10:57.310 00:10:57.310 --- 10.0.0.1 ping statistics --- 00:10:57.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.310 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2408687 00:10:57.310 17:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2408687 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2408687 ']' 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.310 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 [2024-11-20 17:05:14.492156] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:10:57.311 [2024-11-20 17:05:14.492198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.311 [2024-11-20 17:05:14.551686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.311 [2024-11-20 17:05:14.591949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:57.311 [2024-11-20 17:05:14.591984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.311 [2024-11-20 17:05:14.591992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.311 [2024-11-20 17:05:14.591999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.311 [2024-11-20 17:05:14.592003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.311 [2024-11-20 17:05:14.593446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.311 [2024-11-20 17:05:14.593552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.311 [2024-11-20 17:05:14.593659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.311 [2024-11-20 17:05:14.593661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 [2024-11-20 17:05:14.743366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 Null1 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 
17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 [2024-11-20 17:05:14.800359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 Null2 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 
17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 Null3 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 Null4 00:10:57.311 
17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:57.311 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.312 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.312 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.312 17:05:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:57.312 00:10:57.312 Discovery Log Number of Records 6, Generation counter 6 00:10:57.312 =====Discovery Log Entry 0====== 00:10:57.312 trtype: tcp 00:10:57.312 adrfam: ipv4 00:10:57.312 subtype: current discovery subsystem 00:10:57.312 treq: not required 00:10:57.312 portid: 0 00:10:57.312 trsvcid: 4420 00:10:57.312 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:57.312 traddr: 10.0.0.2 00:10:57.312 eflags: explicit discovery connections, duplicate discovery information 00:10:57.312 sectype: none 00:10:57.312 =====Discovery Log Entry 1====== 00:10:57.312 trtype: tcp 00:10:57.312 adrfam: ipv4 00:10:57.312 subtype: nvme subsystem 00:10:57.312 treq: not required 00:10:57.312 portid: 0 00:10:57.312 trsvcid: 4420 00:10:57.312 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:57.312 traddr: 10.0.0.2 00:10:57.312 eflags: none 00:10:57.312 sectype: none 00:10:57.312 =====Discovery Log Entry 2====== 00:10:57.312 
trtype: tcp 00:10:57.312 adrfam: ipv4 00:10:57.312 subtype: nvme subsystem 00:10:57.312 treq: not required 00:10:57.312 portid: 0 00:10:57.312 trsvcid: 4420 00:10:57.312 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:57.312 traddr: 10.0.0.2 00:10:57.312 eflags: none 00:10:57.312 sectype: none 00:10:57.312 =====Discovery Log Entry 3====== 00:10:57.312 trtype: tcp 00:10:57.312 adrfam: ipv4 00:10:57.312 subtype: nvme subsystem 00:10:57.312 treq: not required 00:10:57.312 portid: 0 00:10:57.312 trsvcid: 4420 00:10:57.312 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:57.312 traddr: 10.0.0.2 00:10:57.312 eflags: none 00:10:57.312 sectype: none 00:10:57.312 =====Discovery Log Entry 4====== 00:10:57.312 trtype: tcp 00:10:57.312 adrfam: ipv4 00:10:57.312 subtype: nvme subsystem 00:10:57.312 treq: not required 00:10:57.312 portid: 0 00:10:57.312 trsvcid: 4420 00:10:57.312 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:57.312 traddr: 10.0.0.2 00:10:57.312 eflags: none 00:10:57.312 sectype: none 00:10:57.312 =====Discovery Log Entry 5====== 00:10:57.312 trtype: tcp 00:10:57.312 adrfam: ipv4 00:10:57.312 subtype: discovery subsystem referral 00:10:57.312 treq: not required 00:10:57.312 portid: 0 00:10:57.312 trsvcid: 4430 00:10:57.312 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:57.312 traddr: 10.0.0.2 00:10:57.312 eflags: none 00:10:57.312 sectype: none 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:57.312 Perform nvmf subsystem discovery via RPC 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.312 [ 00:10:57.312 { 00:10:57.312 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:57.312 "subtype": "Discovery", 00:10:57.312 "listen_addresses": [ 00:10:57.312 { 00:10:57.312 "trtype": "TCP", 00:10:57.312 "adrfam": "IPv4", 00:10:57.312 "traddr": "10.0.0.2", 00:10:57.312 "trsvcid": "4420" 00:10:57.312 } 00:10:57.312 ], 00:10:57.312 "allow_any_host": true, 00:10:57.312 "hosts": [] 00:10:57.312 }, 00:10:57.312 { 00:10:57.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.312 "subtype": "NVMe", 00:10:57.312 "listen_addresses": [ 00:10:57.312 { 00:10:57.312 "trtype": "TCP", 00:10:57.312 "adrfam": "IPv4", 00:10:57.312 "traddr": "10.0.0.2", 00:10:57.312 "trsvcid": "4420" 00:10:57.312 } 00:10:57.312 ], 00:10:57.312 "allow_any_host": true, 00:10:57.312 "hosts": [], 00:10:57.312 "serial_number": "SPDK00000000000001", 00:10:57.312 "model_number": "SPDK bdev Controller", 00:10:57.312 "max_namespaces": 32, 00:10:57.312 "min_cntlid": 1, 00:10:57.312 "max_cntlid": 65519, 00:10:57.312 "namespaces": [ 00:10:57.312 { 00:10:57.312 "nsid": 1, 00:10:57.312 "bdev_name": "Null1", 00:10:57.312 "name": "Null1", 00:10:57.312 "nguid": "F26D1A997E024193BD9A6664957EDAC3", 00:10:57.312 "uuid": "f26d1a99-7e02-4193-bd9a-6664957edac3" 00:10:57.312 } 00:10:57.312 ] 00:10:57.312 }, 00:10:57.312 { 00:10:57.312 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:57.312 "subtype": "NVMe", 00:10:57.312 "listen_addresses": [ 00:10:57.312 { 00:10:57.312 "trtype": "TCP", 00:10:57.312 "adrfam": "IPv4", 00:10:57.312 "traddr": "10.0.0.2", 00:10:57.312 "trsvcid": "4420" 00:10:57.312 } 00:10:57.312 ], 00:10:57.312 "allow_any_host": true, 00:10:57.312 "hosts": [], 00:10:57.312 "serial_number": "SPDK00000000000002", 00:10:57.312 "model_number": "SPDK bdev Controller", 00:10:57.312 "max_namespaces": 32, 00:10:57.312 "min_cntlid": 1, 00:10:57.312 "max_cntlid": 65519, 00:10:57.312 "namespaces": [ 00:10:57.312 { 00:10:57.312 "nsid": 1, 00:10:57.312 "bdev_name": "Null2", 00:10:57.312 "name": "Null2", 00:10:57.312 "nguid": "C99B0D9405E04177AEEC27C022895137", 
00:10:57.312 "uuid": "c99b0d94-05e0-4177-aeec-27c022895137" 00:10:57.312 } 00:10:57.312 ] 00:10:57.312 }, 00:10:57.312 { 00:10:57.312 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:57.312 "subtype": "NVMe", 00:10:57.312 "listen_addresses": [ 00:10:57.312 { 00:10:57.312 "trtype": "TCP", 00:10:57.312 "adrfam": "IPv4", 00:10:57.312 "traddr": "10.0.0.2", 00:10:57.312 "trsvcid": "4420" 00:10:57.312 } 00:10:57.312 ], 00:10:57.312 "allow_any_host": true, 00:10:57.312 "hosts": [], 00:10:57.312 "serial_number": "SPDK00000000000003", 00:10:57.312 "model_number": "SPDK bdev Controller", 00:10:57.312 "max_namespaces": 32, 00:10:57.312 "min_cntlid": 1, 00:10:57.312 "max_cntlid": 65519, 00:10:57.312 "namespaces": [ 00:10:57.312 { 00:10:57.312 "nsid": 1, 00:10:57.312 "bdev_name": "Null3", 00:10:57.312 "name": "Null3", 00:10:57.312 "nguid": "A97260A925524FFAAEDEADCAA57F76F7", 00:10:57.312 "uuid": "a97260a9-2552-4ffa-aede-adcaa57f76f7" 00:10:57.312 } 00:10:57.312 ] 00:10:57.312 }, 00:10:57.312 { 00:10:57.312 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:57.312 "subtype": "NVMe", 00:10:57.312 "listen_addresses": [ 00:10:57.312 { 00:10:57.312 "trtype": "TCP", 00:10:57.312 "adrfam": "IPv4", 00:10:57.312 "traddr": "10.0.0.2", 00:10:57.312 "trsvcid": "4420" 00:10:57.312 } 00:10:57.312 ], 00:10:57.312 "allow_any_host": true, 00:10:57.312 "hosts": [], 00:10:57.312 "serial_number": "SPDK00000000000004", 00:10:57.312 "model_number": "SPDK bdev Controller", 00:10:57.312 "max_namespaces": 32, 00:10:57.312 "min_cntlid": 1, 00:10:57.312 "max_cntlid": 65519, 00:10:57.312 "namespaces": [ 00:10:57.312 { 00:10:57.312 "nsid": 1, 00:10:57.312 "bdev_name": "Null4", 00:10:57.312 "name": "Null4", 00:10:57.312 "nguid": "883DF30627BB4D618D77499F466FEAF1", 00:10:57.312 "uuid": "883df306-27bb-4d61-8d77-499f466feaf1" 00:10:57.312 } 00:10:57.312 ] 00:10:57.312 } 00:10:57.312 ] 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.312 
17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:57.312 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.313 rmmod nvme_tcp 00:10:57.313 rmmod nvme_fabrics 00:10:57.313 rmmod nvme_keyring 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2408687 ']' 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2408687 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2408687 ']' 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2408687 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2408687 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2408687' 00:10:57.313 killing process with pid 2408687 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2408687 00:10:57.313 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2408687 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.573 17:05:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.586 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.586 00:10:59.586 real 0m9.310s 00:10:59.586 user 0m5.450s 00:10:59.586 sys 0m4.813s 00:10:59.586 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.586 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:59.586 ************************************ 00:10:59.586 END TEST nvmf_target_discovery 00:10:59.586 ************************************ 00:10:59.586 17:05:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:59.586 17:05:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.586 17:05:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.586 17:05:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.846 ************************************ 00:10:59.846 START TEST nvmf_referrals 00:10:59.846 ************************************ 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:59.846 * Looking for test storage... 
00:10:59.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:59.846 17:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.846 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:59.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.847 
--rc genhtml_branch_coverage=1 00:10:59.847 --rc genhtml_function_coverage=1 00:10:59.847 --rc genhtml_legend=1 00:10:59.847 --rc geninfo_all_blocks=1 00:10:59.847 --rc geninfo_unexecuted_blocks=1 00:10:59.847 00:10:59.847 ' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:59.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.847 --rc genhtml_branch_coverage=1 00:10:59.847 --rc genhtml_function_coverage=1 00:10:59.847 --rc genhtml_legend=1 00:10:59.847 --rc geninfo_all_blocks=1 00:10:59.847 --rc geninfo_unexecuted_blocks=1 00:10:59.847 00:10:59.847 ' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:59.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.847 --rc genhtml_branch_coverage=1 00:10:59.847 --rc genhtml_function_coverage=1 00:10:59.847 --rc genhtml_legend=1 00:10:59.847 --rc geninfo_all_blocks=1 00:10:59.847 --rc geninfo_unexecuted_blocks=1 00:10:59.847 00:10:59.847 ' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:59.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.847 --rc genhtml_branch_coverage=1 00:10:59.847 --rc genhtml_function_coverage=1 00:10:59.847 --rc genhtml_legend=1 00:10:59.847 --rc geninfo_all_blocks=1 00:10:59.847 --rc geninfo_unexecuted_blocks=1 00:10:59.847 00:10:59.847 ' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.847 
17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.847 17:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.847 17:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.847 17:05:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:06.421 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:06.421 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:06.421 Found net devices under 0000:86:00.0: cvl_0_0 00:11:06.421 17:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:06.421 Found net devices under 0000:86:00.1: cvl_0_1 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.421 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:11:06.421 00:11:06.421 --- 10.0.0.2 ping statistics --- 00:11:06.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.421 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:11:06.422 00:11:06.422 --- 10.0.0.1 ping statistics --- 00:11:06.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.422 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2412471 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2412471 00:11:06.422 
17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2412471 ']' 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.422 17:05:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 [2024-11-20 17:05:23.936618] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:11:06.422 [2024-11-20 17:05:23.936670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.422 [2024-11-20 17:05:24.013927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.422 [2024-11-20 17:05:24.055489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.422 [2024-11-20 17:05:24.055527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:06.422 [2024-11-20 17:05:24.055534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.422 [2024-11-20 17:05:24.055541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.422 [2024-11-20 17:05:24.055545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.422 [2024-11-20 17:05:24.057137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.422 [2024-11-20 17:05:24.057257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.422 [2024-11-20 17:05:24.057299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.422 [2024-11-20 17:05:24.057300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 [2024-11-20 17:05:24.207701] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 [2024-11-20 17:05:24.243411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:06.422 17:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.422 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.681 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.682 17:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.682 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.941 17:05:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:07.200 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:07.201 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.201 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:07.459 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:07.459 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.459 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.459 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.459 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.460 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.718 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:07.718 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.718 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:07.719 17:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:07.719 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@82 -- # jq length 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.977 17:05:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set 
+e 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.236 rmmod nvme_tcp 00:11:08.236 rmmod nvme_fabrics 00:11:08.236 rmmod nvme_keyring 00:11:08.236 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2412471 ']' 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2412471 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2412471 ']' 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2412471 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2412471 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2412471' 00:11:08.237 killing process with pid 2412471 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 
-- # kill 2412471 00:11:08.237 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2412471 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.496 17:05:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.035 00:11:11.035 real 0m10.815s 00:11:11.035 user 0m11.706s 00:11:11.035 sys 0m5.345s 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.035 
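The `killprocess 2412471` teardown traced above follows a fixed flow: verify the pid still exists, read its command name with `ps` (it resolves to `reactor_0` in this log) so a `sudo` wrapper is never signalled directly, then kill and reap it. A minimal sketch of that flow, assuming plain bash on Linux — the function body is a reconstruction for illustration, not the verbatim `common/autotest_common.sh` source:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow traced above (reconstructed, not verbatim).
killprocess() {
	local pid=$1 process_name
	kill -0 "$pid" 2>/dev/null || return 1           # pid must still exist
	process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 in the log
	[ "$process_name" = sudo ] && return 1           # never signal the sudo wrapper itself
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" 2>/dev/null || true                  # reap, ignoring the SIGTERM exit status
}

sleep 30 &          # stand-in for the nvmf target app started earlier in the log
killprocess $!
```

The `wait` at the end mirrors the `-- # wait 2412471` step in the trace: it reaps the child so a later `kill -0` check sees the process as fully gone.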
************************************ 00:11:11.035 END TEST nvmf_referrals 00:11:11.035 ************************************ 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.035 ************************************ 00:11:11.035 START TEST nvmf_connect_disconnect 00:11:11.035 ************************************ 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.035 * Looking for test storage... 
00:11:11.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
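The long `cmp_versions` trace above (driven by `lt 1.15 2` from the lcov version check) implements a component-wise numeric comparison: each version string is split on `.`, `-` and `:` into an array, missing components default to 0, and components are compared left to right. A condensed sketch of that logic, reconstructed from the traced `scripts/common.sh` steps and handling only the `<` operator used here:

```shell
#!/usr/bin/env bash
# Condensed sketch of the lt/cmp_versions logic traced above ("<" only).
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
	local ver1 ver2 ver1_l ver2_l v
	IFS=.-: read -ra ver1 <<< "$1"   # e.g. "1.15" -> (1 15)
	IFS=.-: read -ra ver2 <<< "$3"   # e.g. "2"    -> (2)
	ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
	for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
		(( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # first component bigger: not "<"
		(( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # first component smaller: "<"
	done
	return 1  # equal versions are not "<"
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Splitting into numeric components is what makes `1.9 < 1.15` come out true here, where a plain string comparison would get it wrong.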
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.035 --rc genhtml_branch_coverage=1 00:11:11.035 --rc genhtml_function_coverage=1 00:11:11.035 --rc genhtml_legend=1 00:11:11.035 --rc geninfo_all_blocks=1 00:11:11.035 --rc geninfo_unexecuted_blocks=1 00:11:11.035 00:11:11.035 ' 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.035 --rc genhtml_branch_coverage=1 00:11:11.035 --rc genhtml_function_coverage=1 00:11:11.035 --rc genhtml_legend=1 00:11:11.035 --rc geninfo_all_blocks=1 00:11:11.035 --rc geninfo_unexecuted_blocks=1 00:11:11.035 00:11:11.035 ' 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.035 --rc genhtml_branch_coverage=1 00:11:11.035 --rc genhtml_function_coverage=1 00:11:11.035 --rc genhtml_legend=1 00:11:11.035 --rc geninfo_all_blocks=1 00:11:11.035 --rc geninfo_unexecuted_blocks=1 00:11:11.035 00:11:11.035 ' 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.035 --rc genhtml_branch_coverage=1 00:11:11.035 --rc genhtml_function_coverage=1 00:11:11.035 --rc genhtml_legend=1 00:11:11.035 --rc geninfo_all_blocks=1 00:11:11.035 --rc geninfo_unexecuted_blocks=1 00:11:11.035 00:11:11.035 ' 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.035 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
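The `[: : integer expression expected` warning captured above comes from `nvmf/common.sh` line 33 testing an empty variable with `-eq` (`'[' '' -eq 1 ']'`): the script tolerates the non-zero status, but `test` prints the warning because `''` is not an integer. A minimal reproduction plus the usual `${var:-0}` guard; `check_flag` is a hypothetical helper name, not something from the traced scripts:

```shell
#!/usr/bin/env bash
# Reproduction of the "[: : integer expression expected" warning seen above,
# and the ${var:-0} guard that avoids it. check_flag is a hypothetical helper.
flag=""

[ "$flag" -eq 1 ] 2>/dev/null && echo "flag set"   # warns and fails: '' is not an integer

check_flag() {
	# Default empty/unset input to 0 so -eq always compares two integers.
	[ "${1:-0}" -eq 1 ] && echo yes || echo no
}

check_flag "$flag"   # -> no
check_flag 1         # -> yes
```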
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.036 17:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.607 17:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.607 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.608 17:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:17.608 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:17.608 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.608 17:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:17.608 Found net devices under 0000:86:00.0: cvl_0_0 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.608 17:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:17.608 Found net devices under 0000:86:00.1: cvl_0_1 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.608 17:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:11:17.608 00:11:17.608 --- 10.0.0.2 ping statistics --- 00:11:17.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.608 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:17.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:11:17.608 00:11:17.608 --- 10.0.0.1 ping statistics --- 00:11:17.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.608 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:17.608 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2416350 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2416350 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2416350 ']' 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.609 17:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.609 [2024-11-20 17:05:34.771495] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:11:17.609 [2024-11-20 17:05:34.771538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.609 [2024-11-20 17:05:34.851027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.609 [2024-11-20 17:05:34.892552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:17.609 [2024-11-20 17:05:34.892591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.609 [2024-11-20 17:05:34.892599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.609 [2024-11-20 17:05:34.892605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.609 [2024-11-20 17:05:34.892610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.609 [2024-11-20 17:05:34.894120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.609 [2024-11-20 17:05:34.894250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.609 [2024-11-20 17:05:34.894296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.609 [2024-11-20 17:05:34.894306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.609 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.609 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:17.609 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.609 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.609 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.609 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.609 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:17.609 17:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.609 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.609 [2024-11-20 17:05:35.646045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.870 17:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.870 [2024-11-20 17:05:35.714733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:17.870 17:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:21.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:34.300 17:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.300 rmmod nvme_tcp 00:11:34.300 rmmod nvme_fabrics 00:11:34.300 rmmod nvme_keyring 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2416350 ']' 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2416350 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2416350 ']' 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2416350 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416350 
00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416350' 00:11:34.300 killing process with pid 2416350 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2416350 00:11:34.300 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2416350 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.560 17:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.560 17:05:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.097 00:11:37.097 real 0m26.050s 00:11:37.097 user 1m11.687s 00:11:37.097 sys 0m5.896s 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:37.097 ************************************ 00:11:37.097 END TEST nvmf_connect_disconnect 00:11:37.097 ************************************ 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.097 ************************************ 00:11:37.097 START TEST nvmf_multitarget 00:11:37.097 ************************************ 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:37.097 * Looking for test storage... 
00:11:37.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.097 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:37.098 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.098 --rc genhtml_branch_coverage=1 00:11:37.098 --rc genhtml_function_coverage=1 00:11:37.098 --rc genhtml_legend=1 00:11:37.098 --rc geninfo_all_blocks=1 00:11:37.098 --rc geninfo_unexecuted_blocks=1 00:11:37.098 00:11:37.098 ' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:37.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.098 --rc genhtml_branch_coverage=1 00:11:37.098 --rc genhtml_function_coverage=1 00:11:37.098 --rc genhtml_legend=1 00:11:37.098 --rc geninfo_all_blocks=1 00:11:37.098 --rc geninfo_unexecuted_blocks=1 00:11:37.098 00:11:37.098 ' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:37.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.098 --rc genhtml_branch_coverage=1 00:11:37.098 --rc genhtml_function_coverage=1 00:11:37.098 --rc genhtml_legend=1 00:11:37.098 --rc geninfo_all_blocks=1 00:11:37.098 --rc geninfo_unexecuted_blocks=1 00:11:37.098 00:11:37.098 ' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:37.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.098 --rc genhtml_branch_coverage=1 00:11:37.098 --rc genhtml_function_coverage=1 00:11:37.098 --rc genhtml_legend=1 00:11:37.098 --rc geninfo_all_blocks=1 00:11:37.098 --rc geninfo_unexecuted_blocks=1 00:11:37.098 00:11:37.098 ' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.098 17:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.098 17:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:37.098 17:05:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.670 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.670 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.670 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.670 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:43.671 17:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.671 17:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:43.671 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:43.671 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.671 17:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:43.671 Found net devices under 0000:86:00.0: cvl_0_0 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.671 
17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:43.671 Found net devices under 0000:86:00.1: cvl_0_1 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.671 17:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:11:43.671 00:11:43.671 --- 10.0.0.2 ping statistics --- 00:11:43.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.671 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:11:43.671 00:11:43.671 --- 10.0.0.1 ping statistics --- 00:11:43.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.671 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.671 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2423010 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2423010 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2423010 ']' 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.672 17:06:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.672 [2024-11-20 17:06:00.872255] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:11:43.672 [2024-11-20 17:06:00.872311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.672 [2024-11-20 17:06:00.951817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.672 [2024-11-20 17:06:00.994721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.672 [2024-11-20 17:06:00.994757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:43.672 [2024-11-20 17:06:00.994764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.672 [2024-11-20 17:06:00.994770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.672 [2024-11-20 17:06:00.994775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.672 [2024-11-20 17:06:00.996229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.672 [2024-11-20 17:06:00.996339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.672 [2024-11-20 17:06:00.996444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.672 [2024-11-20 17:06:00.996445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.672 17:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:43.672 "nvmf_tgt_1" 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:43.672 "nvmf_tgt_2" 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:43.672 true 00:11:43.672 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:43.931 true 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.931 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.931 rmmod nvme_tcp 00:11:43.931 rmmod nvme_fabrics 00:11:43.931 rmmod nvme_keyring 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2423010 ']' 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2423010 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2423010 ']' 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2423010 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.191 17:06:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2423010 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2423010' 00:11:44.191 killing process with pid 2423010 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2423010 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2423010 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.191 17:06:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.727 00:11:46.727 real 0m9.622s 00:11:46.727 user 0m7.343s 00:11:46.727 sys 0m4.887s 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:46.727 ************************************ 00:11:46.727 END TEST nvmf_multitarget 00:11:46.727 ************************************ 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.727 ************************************ 00:11:46.727 START TEST nvmf_rpc 00:11:46.727 ************************************ 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:46.727 * Looking for test storage... 
00:11:46.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.727 17:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.727 --rc genhtml_branch_coverage=1 00:11:46.727 --rc genhtml_function_coverage=1 00:11:46.727 --rc genhtml_legend=1 00:11:46.727 --rc geninfo_all_blocks=1 00:11:46.727 --rc geninfo_unexecuted_blocks=1 
00:11:46.727 00:11:46.727 ' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.727 --rc genhtml_branch_coverage=1 00:11:46.727 --rc genhtml_function_coverage=1 00:11:46.727 --rc genhtml_legend=1 00:11:46.727 --rc geninfo_all_blocks=1 00:11:46.727 --rc geninfo_unexecuted_blocks=1 00:11:46.727 00:11:46.727 ' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.727 --rc genhtml_branch_coverage=1 00:11:46.727 --rc genhtml_function_coverage=1 00:11:46.727 --rc genhtml_legend=1 00:11:46.727 --rc geninfo_all_blocks=1 00:11:46.727 --rc geninfo_unexecuted_blocks=1 00:11:46.727 00:11:46.727 ' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.727 --rc genhtml_branch_coverage=1 00:11:46.727 --rc genhtml_function_coverage=1 00:11:46.727 --rc genhtml_legend=1 00:11:46.727 --rc geninfo_all_blocks=1 00:11:46.727 --rc geninfo_unexecuted_blocks=1 00:11:46.727 00:11:46.727 ' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.727 17:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.727 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.728 17:06:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.728 17:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.295 
17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:53.295 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:53.295 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:53.295 Found net devices under 0000:86:00.0: cvl_0_0 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:53.295 Found net devices under 0000:86:00.1: cvl_0_1 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.295 17:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.295 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.296 
17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:11:53.296 00:11:53.296 --- 10.0.0.2 ping statistics --- 00:11:53.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.296 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:11:53.296 00:11:53.296 --- 10.0.0.1 ping statistics --- 00:11:53.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.296 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2427244 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2427244 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2427244 ']' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.296 [2024-11-20 17:06:10.655618] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:11:53.296 [2024-11-20 17:06:10.655667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.296 [2024-11-20 17:06:10.735541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.296 [2024-11-20 17:06:10.778289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.296 [2024-11-20 17:06:10.778325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:53.296 [2024-11-20 17:06:10.778334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.296 [2024-11-20 17:06:10.778342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.296 [2024-11-20 17:06:10.778348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.296 [2024-11-20 17:06:10.779901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.296 [2024-11-20 17:06:10.780009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.296 [2024-11-20 17:06:10.780118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.296 [2024-11-20 17:06:10.780118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.296 17:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:53.296 "tick_rate": 2100000000, 00:11:53.296 "poll_groups": [ 00:11:53.296 { 00:11:53.296 "name": "nvmf_tgt_poll_group_000", 00:11:53.296 "admin_qpairs": 0, 00:11:53.296 "io_qpairs": 0, 00:11:53.296 "current_admin_qpairs": 0, 00:11:53.296 "current_io_qpairs": 0, 00:11:53.296 "pending_bdev_io": 0, 00:11:53.296 "completed_nvme_io": 0, 00:11:53.296 "transports": [] 00:11:53.296 }, 00:11:53.296 { 00:11:53.296 "name": "nvmf_tgt_poll_group_001", 00:11:53.296 "admin_qpairs": 0, 00:11:53.296 "io_qpairs": 0, 00:11:53.296 "current_admin_qpairs": 0, 00:11:53.296 "current_io_qpairs": 0, 00:11:53.296 "pending_bdev_io": 0, 00:11:53.296 "completed_nvme_io": 0, 00:11:53.296 "transports": [] 00:11:53.296 }, 00:11:53.296 { 00:11:53.296 "name": "nvmf_tgt_poll_group_002", 00:11:53.296 "admin_qpairs": 0, 00:11:53.296 "io_qpairs": 0, 00:11:53.296 "current_admin_qpairs": 0, 00:11:53.296 "current_io_qpairs": 0, 00:11:53.296 "pending_bdev_io": 0, 00:11:53.296 "completed_nvme_io": 0, 00:11:53.296 "transports": [] 00:11:53.296 }, 00:11:53.296 { 00:11:53.296 "name": "nvmf_tgt_poll_group_003", 00:11:53.296 "admin_qpairs": 0, 00:11:53.296 "io_qpairs": 0, 00:11:53.296 "current_admin_qpairs": 0, 00:11:53.296 "current_io_qpairs": 0, 00:11:53.296 "pending_bdev_io": 0, 00:11:53.296 "completed_nvme_io": 0, 00:11:53.296 "transports": [] 00:11:53.296 } 00:11:53.296 ] 00:11:53.296 }' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:53.296 17:06:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:53.297 17:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.297 [2024-11-20 17:06:11.026626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:53.297 "tick_rate": 2100000000, 00:11:53.297 "poll_groups": [ 00:11:53.297 { 00:11:53.297 "name": "nvmf_tgt_poll_group_000", 00:11:53.297 "admin_qpairs": 0, 00:11:53.297 "io_qpairs": 0, 00:11:53.297 "current_admin_qpairs": 0, 00:11:53.297 "current_io_qpairs": 0, 00:11:53.297 "pending_bdev_io": 0, 00:11:53.297 "completed_nvme_io": 0, 00:11:53.297 "transports": [ 00:11:53.297 { 00:11:53.297 "trtype": "TCP" 00:11:53.297 } 00:11:53.297 ] 00:11:53.297 }, 00:11:53.297 { 00:11:53.297 "name": "nvmf_tgt_poll_group_001", 00:11:53.297 "admin_qpairs": 0, 00:11:53.297 "io_qpairs": 0, 00:11:53.297 "current_admin_qpairs": 0, 00:11:53.297 "current_io_qpairs": 0, 00:11:53.297 "pending_bdev_io": 0, 00:11:53.297 
"completed_nvme_io": 0, 00:11:53.297 "transports": [ 00:11:53.297 { 00:11:53.297 "trtype": "TCP" 00:11:53.297 } 00:11:53.297 ] 00:11:53.297 }, 00:11:53.297 { 00:11:53.297 "name": "nvmf_tgt_poll_group_002", 00:11:53.297 "admin_qpairs": 0, 00:11:53.297 "io_qpairs": 0, 00:11:53.297 "current_admin_qpairs": 0, 00:11:53.297 "current_io_qpairs": 0, 00:11:53.297 "pending_bdev_io": 0, 00:11:53.297 "completed_nvme_io": 0, 00:11:53.297 "transports": [ 00:11:53.297 { 00:11:53.297 "trtype": "TCP" 00:11:53.297 } 00:11:53.297 ] 00:11:53.297 }, 00:11:53.297 { 00:11:53.297 "name": "nvmf_tgt_poll_group_003", 00:11:53.297 "admin_qpairs": 0, 00:11:53.297 "io_qpairs": 0, 00:11:53.297 "current_admin_qpairs": 0, 00:11:53.297 "current_io_qpairs": 0, 00:11:53.297 "pending_bdev_io": 0, 00:11:53.297 "completed_nvme_io": 0, 00:11:53.297 "transports": [ 00:11:53.297 { 00:11:53.297 "trtype": "TCP" 00:11:53.297 } 00:11:53.297 ] 00:11:53.297 } 00:11:53.297 ] 00:11:53.297 }' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:53.297 
17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.297 Malloc1 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:53.297 17:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.297 [2024-11-20 17:06:11.202902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:53.297 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:53.298 [2024-11-20 17:06:11.231540] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:11:53.298 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:53.298 could not add new controller: failed to write to nvme-fabrics device 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.298 17:06:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.672 17:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.672 17:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.672 17:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.672 17:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.672 17:06:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.573 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.573 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.573 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.573 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.573 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.573 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:11:56.573 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.574 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:56.832 17:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.832 [2024-11-20 17:06:14.635917] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:11:56.832 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:56.832 could not add new controller: failed to write to nvme-fabrics device 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:56.832 
17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.832 17:06:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.207 17:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.207 17:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:58.207 17:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.207 17:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:58.207 17:06:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:00.113 17:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:00.113 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.114 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.114 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.114 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.114 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.114 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.114 17:06:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.114 [2024-11-20 17:06:18.005885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.114 17:06:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.490 17:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.490 17:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:01.490 17:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.490 17:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:01.490 17:06:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.398 
17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 [2024-11-20 17:06:21.406066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.398 17:06:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.776 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.776 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:04.776 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.776 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:04.776 17:06:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.779 17:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.779 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.780 [2024-11-20 17:06:24.719435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.780 17:06:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.158 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.158 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:08.158 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.159 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:08.159 17:06:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.188 17:06:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.188 [2024-11-20 17:06:28.016044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.188 17:06:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.569 17:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.569 17:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:11.569 17:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:11.569 17:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:11.569 17:06:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.477 [2024-11-20 17:06:31.446303] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.477 17:06:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.855 17:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.855 17:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.855 17:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.855 17:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.855 17:06:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 [2024-11-20 17:06:34.726587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 [2024-11-20 17:06:34.774685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 
17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.756 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:17.015 [2024-11-20 17:06:34.822809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.015 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 [2024-11-20 17:06:34.870984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 [2024-11-20 17:06:34.919151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:17.016 "tick_rate": 2100000000, 00:12:17.016 "poll_groups": [ 00:12:17.016 { 00:12:17.016 "name": "nvmf_tgt_poll_group_000", 00:12:17.016 "admin_qpairs": 2, 00:12:17.016 "io_qpairs": 168, 00:12:17.016 "current_admin_qpairs": 0, 00:12:17.016 "current_io_qpairs": 0, 00:12:17.016 "pending_bdev_io": 0, 00:12:17.016 "completed_nvme_io": 218, 00:12:17.016 "transports": [ 00:12:17.016 { 00:12:17.016 "trtype": "TCP" 00:12:17.016 } 00:12:17.016 ] 00:12:17.016 }, 00:12:17.016 { 00:12:17.016 "name": "nvmf_tgt_poll_group_001", 00:12:17.016 "admin_qpairs": 2, 00:12:17.016 "io_qpairs": 168, 00:12:17.016 "current_admin_qpairs": 0, 00:12:17.016 "current_io_qpairs": 0, 00:12:17.016 "pending_bdev_io": 0, 00:12:17.016 "completed_nvme_io": 268, 00:12:17.016 "transports": [ 00:12:17.016 { 00:12:17.016 "trtype": "TCP" 00:12:17.016 } 00:12:17.016 ] 00:12:17.016 }, 00:12:17.016 { 00:12:17.016 "name": "nvmf_tgt_poll_group_002", 00:12:17.016 "admin_qpairs": 1, 00:12:17.016 "io_qpairs": 168, 00:12:17.016 "current_admin_qpairs": 0, 00:12:17.016 "current_io_qpairs": 0, 00:12:17.016 "pending_bdev_io": 0, 
00:12:17.016 "completed_nvme_io": 268, 00:12:17.016 "transports": [ 00:12:17.016 { 00:12:17.016 "trtype": "TCP" 00:12:17.016 } 00:12:17.016 ] 00:12:17.016 }, 00:12:17.016 { 00:12:17.016 "name": "nvmf_tgt_poll_group_003", 00:12:17.016 "admin_qpairs": 2, 00:12:17.016 "io_qpairs": 168, 00:12:17.016 "current_admin_qpairs": 0, 00:12:17.016 "current_io_qpairs": 0, 00:12:17.016 "pending_bdev_io": 0, 00:12:17.016 "completed_nvme_io": 268, 00:12:17.016 "transports": [ 00:12:17.016 { 00:12:17.016 "trtype": "TCP" 00:12:17.016 } 00:12:17.016 ] 00:12:17.016 } 00:12:17.016 ] 00:12:17.016 }' 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:17.016 17:06:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.016 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:17.016 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:17.016 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:17.016 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:17.016 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.275 rmmod nvme_tcp 00:12:17.275 rmmod nvme_fabrics 00:12:17.275 rmmod nvme_keyring 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2427244 ']' 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2427244 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2427244 ']' 00:12:17.275 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2427244 00:12:17.276 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:17.276 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.276 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2427244 00:12:17.276 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.276 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.276 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2427244' 00:12:17.276 killing process with pid 2427244 00:12:17.276 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2427244 00:12:17.276 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2427244 00:12:17.534 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.534 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.534 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.534 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:17.535 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:17.535 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.535 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.535 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.535 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.535 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.535 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.535 17:06:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.440 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.440 00:12:19.440 real 0m33.098s 00:12:19.440 user 1m39.672s 00:12:19.440 sys 0m6.594s 00:12:19.440 17:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.440 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.440 ************************************ 00:12:19.440 END TEST nvmf_rpc 00:12:19.440 ************************************ 00:12:19.440 17:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:19.440 17:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.440 17:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.440 17:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.700 ************************************ 00:12:19.700 START TEST nvmf_invalid 00:12:19.700 ************************************ 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:19.700 * Looking for test storage... 
00:12:19.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:19.700 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:19.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.701 --rc genhtml_branch_coverage=1 00:12:19.701 --rc 
genhtml_function_coverage=1 00:12:19.701 --rc genhtml_legend=1 00:12:19.701 --rc geninfo_all_blocks=1 00:12:19.701 --rc geninfo_unexecuted_blocks=1 00:12:19.701 00:12:19.701 ' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:19.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.701 --rc genhtml_branch_coverage=1 00:12:19.701 --rc genhtml_function_coverage=1 00:12:19.701 --rc genhtml_legend=1 00:12:19.701 --rc geninfo_all_blocks=1 00:12:19.701 --rc geninfo_unexecuted_blocks=1 00:12:19.701 00:12:19.701 ' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:19.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.701 --rc genhtml_branch_coverage=1 00:12:19.701 --rc genhtml_function_coverage=1 00:12:19.701 --rc genhtml_legend=1 00:12:19.701 --rc geninfo_all_blocks=1 00:12:19.701 --rc geninfo_unexecuted_blocks=1 00:12:19.701 00:12:19.701 ' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:19.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.701 --rc genhtml_branch_coverage=1 00:12:19.701 --rc genhtml_function_coverage=1 00:12:19.701 --rc genhtml_legend=1 00:12:19.701 --rc geninfo_all_blocks=1 00:12:19.701 --rc geninfo_unexecuted_blocks=1 00:12:19.701 00:12:19.701 ' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.701 17:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.701 17:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.701 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.961 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.961 17:06:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.529 17:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.529 17:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:26.529 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.529 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:26.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:26.530 Found net devices under 0000:86:00.0: cvl_0_0 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:26.530 Found net devices under 0000:86:00.1: cvl_0_1 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.530 17:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.530 17:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:26.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:12:26.530 00:12:26.530 --- 10.0.0.2 ping statistics --- 00:12:26.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.530 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:12:26.530 00:12:26.530 --- 10.0.0.1 ping statistics --- 00:12:26.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.530 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.530 17:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2434866 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2434866 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2434866 ']' 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.530 17:06:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.530 [2024-11-20 17:06:43.769984] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:12:26.530 [2024-11-20 17:06:43.770038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.530 [2024-11-20 17:06:43.853423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.530 [2024-11-20 17:06:43.895116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.530 [2024-11-20 17:06:43.895156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.531 [2024-11-20 17:06:43.895165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.531 [2024-11-20 17:06:43.895171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.531 [2024-11-20 17:06:43.895175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:26.531 [2024-11-20 17:06:43.896716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:26.531 [2024-11-20 17:06:43.896826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:26.531 [2024-11-20 17:06:43.896933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:26.531 [2024-11-20 17:06:43.896934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:26.789 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:12:26.789 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:26.789 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:26.789 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:12:26.789 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15025
00:12:26.789 [2024-11-20 17:06:44.813449] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:12:27.049 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode15025", "tgt_name": "foobar", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32603, "message": "Unable to find target foobar" }'
00:12:27.049 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ "$out" == *'Unable to find target'* ]]
00:12:27.049 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:12:27.049 17:06:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24279
00:12:27.049 [2024-11-20 17:06:45.018183] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24279: invalid serial number 'SPDKISFASTANDAWESOME'
00:12:27.049 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode24279", "serial_number": "SPDKISFASTANDAWESOME\u001f", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" }'
00:12:27.049 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ "$out" == *'Invalid SN'* ]]
00:12:27.049 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:12:27.049 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25844
00:12:27.308 [2024-11-20 17:06:45.222830] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25844: invalid model number 'SPDK_Controller'
00:12:27.308 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode25844", "model_number": "SPDK_Controller\u001f", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN SPDK_Controller\u001f" }'
00:12:27.308 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ "$out" == *'Invalid MN'* ]]
00:12:27.308 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:12:27.308 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:12:27.308 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' … '126' '127')
[gen_random_s per-character xtrace condensed: 21 iterations of printf %x / echo -e '\xNN' / string+=<char>, spanning 00:12:27.308-00:12:27.568]
00:12:27.568 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]]
00:12:27.568 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '|8mn-5lm_35F_ElvS)P\k'
00:12:27.568 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '|8mn-5lm_35F_ElvS)P\k' nqn.2016-06.io.spdk:cnode6222
00:12:27.568 [2024-11-20 17:06:45.575973] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6222: invalid serial number '|8mn-5lm_35F_ElvS)P\k'
00:12:27.827 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode6222", "serial_number": "|8mn-5lm_35F_ElvS)P\\k", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN |8mn-5lm_35F_ElvS)P\\k" }'
00:12:27.827 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ "$out" == *'Invalid SN'* ]]
00:12:27.827 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:12:27.828 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
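The xtrace above walks gen_random_s one character at a time. A condensed sketch of what the helper appears to do, reconstructed from the trace (the function and variable names are the ones the trace shows; this is an approximation inferred from the log, not the exact invalid.sh source):

```shell
# Sketch of gen_random_s as reconstructed from the xtrace: build a string of
# $1 characters drawn at random from ASCII codes 32-127, mirroring the
# per-character "printf %x" / "echo -e '\xNN'" / "string+=" steps in the log.
gen_random_s() {
	local length=$1 ll
	local chars=($(seq 32 127)) string=
	for ((ll = 0; ll < length; ll++)); do
		# pick a random code point and append the corresponding character
		string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
	done
	echo "$string"
}

gen_random_s 21   # e.g. '|8mn-5lm_35F_ElvS)P\k' as seen in this run
```

Because the string is random per run, the test echoes it (invalid.sh@31) before feeding it to rpc.py, so a failing run still records which serial number was attempted.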
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' … '126' '127')
[gen_random_s 41 per-character xtrace condensed: one printf %x / echo -e '\xNN' / string+=<char> step per iteration; the excerpt ends mid-generation after 39 of 41 characters, 'gNECTC:t(pMgO8#v WZgcq ey!k;,D-0Yl`3Ma?']
00:12:28.088 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.088 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.088 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:28.088 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:28.088 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:28.088 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.088 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.089 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:28.089 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.089 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:28.089 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.089 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.089 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:12:28.089 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'gNECTC:t(pMgO8#v WZgcq ey!k;,D-0Yl`3Ma?xm' 00:12:28.089 17:06:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'gNECTC:t(pMgO8#v WZgcq ey!k;,D-0Yl`3Ma?xm' nqn.2016-06.io.spdk:cnode853 00:12:28.089 [2024-11-20 17:06:46.053572] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode853: invalid model number 'gNECTC:t(pMgO8#v WZgcq ey!k;,D-0Yl`3Ma?xm' 00:12:28.089 17:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:28.089 { 00:12:28.089 "nqn": "nqn.2016-06.io.spdk:cnode853", 00:12:28.089 "model_number": "gNECTC:t(pMgO8#v WZgcq ey!k;,D-0Yl`3Ma?xm", 00:12:28.089 "method": "nvmf_create_subsystem", 00:12:28.089 "req_id": 1 00:12:28.089 } 00:12:28.089 Got JSON-RPC error response 00:12:28.089 response: 00:12:28.089 { 00:12:28.089 "code": -32602, 00:12:28.089 "message": "Invalid MN gNECTC:t(pMgO8#v WZgcq ey!k;,D-0Yl`3Ma?xm" 00:12:28.089 }' 00:12:28.089 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:28.089 { 00:12:28.089 "nqn": "nqn.2016-06.io.spdk:cnode853", 00:12:28.089 "model_number": "gNECTC:t(pMgO8#v WZgcq ey!k;,D-0Yl`3Ma?xm", 00:12:28.089 "method": "nvmf_create_subsystem", 00:12:28.089 "req_id": 1 00:12:28.089 } 00:12:28.089 Got JSON-RPC error response 00:12:28.089 response: 00:12:28.089 { 00:12:28.089 "code": -32602, 00:12:28.089 "message": "Invalid MN gNECTC:t(pMgO8#v WZgcq ey!k;,D-0Yl`3Ma?xm" 00:12:28.089 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:28.089 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:28.347 [2024-11-20 17:06:46.250310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.347 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:28.606 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:28.606 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:28.606 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:28.606 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 
00:12:28.606 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:28.864 [2024-11-20 17:06:46.651607] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:28.864 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:28.864 { 00:12:28.864 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:28.864 "listen_address": { 00:12:28.864 "trtype": "tcp", 00:12:28.864 "traddr": "", 00:12:28.864 "trsvcid": "4421" 00:12:28.864 }, 00:12:28.864 "method": "nvmf_subsystem_remove_listener", 00:12:28.864 "req_id": 1 00:12:28.864 } 00:12:28.864 Got JSON-RPC error response 00:12:28.864 response: 00:12:28.864 { 00:12:28.864 "code": -32602, 00:12:28.864 "message": "Invalid parameters" 00:12:28.864 }' 00:12:28.864 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:28.864 { 00:12:28.864 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:28.864 "listen_address": { 00:12:28.864 "trtype": "tcp", 00:12:28.864 "traddr": "", 00:12:28.864 "trsvcid": "4421" 00:12:28.864 }, 00:12:28.864 "method": "nvmf_subsystem_remove_listener", 00:12:28.864 "req_id": 1 00:12:28.864 } 00:12:28.864 Got JSON-RPC error response 00:12:28.864 response: 00:12:28.864 { 00:12:28.864 "code": -32602, 00:12:28.864 "message": "Invalid parameters" 00:12:28.864 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:28.864 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25582 -i 0 00:12:28.864 [2024-11-20 17:06:46.860263] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25582: invalid cntlid range [0-65519] 00:12:28.864 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@73 -- # out='request: 00:12:28.864 { 00:12:28.864 "nqn": "nqn.2016-06.io.spdk:cnode25582", 00:12:28.864 "min_cntlid": 0, 00:12:28.864 "method": "nvmf_create_subsystem", 00:12:28.864 "req_id": 1 00:12:28.864 } 00:12:28.864 Got JSON-RPC error response 00:12:28.864 response: 00:12:28.864 { 00:12:28.864 "code": -32602, 00:12:28.864 "message": "Invalid cntlid range [0-65519]" 00:12:28.864 }' 00:12:28.864 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:28.864 { 00:12:28.864 "nqn": "nqn.2016-06.io.spdk:cnode25582", 00:12:28.864 "min_cntlid": 0, 00:12:28.864 "method": "nvmf_create_subsystem", 00:12:28.864 "req_id": 1 00:12:28.864 } 00:12:28.864 Got JSON-RPC error response 00:12:28.864 response: 00:12:28.864 { 00:12:28.864 "code": -32602, 00:12:28.864 "message": "Invalid cntlid range [0-65519]" 00:12:28.864 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:28.864 17:06:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10363 -i 65520 00:12:29.123 [2024-11-20 17:06:47.080984] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10363: invalid cntlid range [65520-65519] 00:12:29.123 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:29.123 { 00:12:29.123 "nqn": "nqn.2016-06.io.spdk:cnode10363", 00:12:29.123 "min_cntlid": 65520, 00:12:29.123 "method": "nvmf_create_subsystem", 00:12:29.123 "req_id": 1 00:12:29.123 } 00:12:29.123 Got JSON-RPC error response 00:12:29.123 response: 00:12:29.123 { 00:12:29.123 "code": -32602, 00:12:29.123 "message": "Invalid cntlid range [65520-65519]" 00:12:29.123 }' 00:12:29.123 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:29.123 { 00:12:29.123 "nqn": "nqn.2016-06.io.spdk:cnode10363", 00:12:29.123 "min_cntlid": 
65520, 00:12:29.123 "method": "nvmf_create_subsystem", 00:12:29.123 "req_id": 1 00:12:29.123 } 00:12:29.123 Got JSON-RPC error response 00:12:29.123 response: 00:12:29.123 { 00:12:29.123 "code": -32602, 00:12:29.123 "message": "Invalid cntlid range [65520-65519]" 00:12:29.123 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.123 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23022 -I 0 00:12:29.382 [2024-11-20 17:06:47.281651] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23022: invalid cntlid range [1-0] 00:12:29.382 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:29.382 { 00:12:29.382 "nqn": "nqn.2016-06.io.spdk:cnode23022", 00:12:29.382 "max_cntlid": 0, 00:12:29.382 "method": "nvmf_create_subsystem", 00:12:29.382 "req_id": 1 00:12:29.382 } 00:12:29.382 Got JSON-RPC error response 00:12:29.382 response: 00:12:29.382 { 00:12:29.382 "code": -32602, 00:12:29.382 "message": "Invalid cntlid range [1-0]" 00:12:29.382 }' 00:12:29.382 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:29.382 { 00:12:29.382 "nqn": "nqn.2016-06.io.spdk:cnode23022", 00:12:29.382 "max_cntlid": 0, 00:12:29.382 "method": "nvmf_create_subsystem", 00:12:29.382 "req_id": 1 00:12:29.382 } 00:12:29.382 Got JSON-RPC error response 00:12:29.382 response: 00:12:29.382 { 00:12:29.382 "code": -32602, 00:12:29.382 "message": "Invalid cntlid range [1-0]" 00:12:29.382 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.382 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29552 -I 65520 00:12:29.641 [2024-11-20 17:06:47.478307] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29552: invalid cntlid range [1-65520] 00:12:29.641 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:29.641 { 00:12:29.641 "nqn": "nqn.2016-06.io.spdk:cnode29552", 00:12:29.641 "max_cntlid": 65520, 00:12:29.641 "method": "nvmf_create_subsystem", 00:12:29.641 "req_id": 1 00:12:29.641 } 00:12:29.641 Got JSON-RPC error response 00:12:29.641 response: 00:12:29.641 { 00:12:29.641 "code": -32602, 00:12:29.641 "message": "Invalid cntlid range [1-65520]" 00:12:29.641 }' 00:12:29.641 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:29.641 { 00:12:29.641 "nqn": "nqn.2016-06.io.spdk:cnode29552", 00:12:29.641 "max_cntlid": 65520, 00:12:29.641 "method": "nvmf_create_subsystem", 00:12:29.641 "req_id": 1 00:12:29.641 } 00:12:29.641 Got JSON-RPC error response 00:12:29.641 response: 00:12:29.641 { 00:12:29.641 "code": -32602, 00:12:29.641 "message": "Invalid cntlid range [1-65520]" 00:12:29.641 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.641 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16294 -i 6 -I 5 00:12:29.900 [2024-11-20 17:06:47.683003] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16294: invalid cntlid range [6-5] 00:12:29.900 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:29.900 { 00:12:29.900 "nqn": "nqn.2016-06.io.spdk:cnode16294", 00:12:29.900 "min_cntlid": 6, 00:12:29.900 "max_cntlid": 5, 00:12:29.900 "method": "nvmf_create_subsystem", 00:12:29.900 "req_id": 1 00:12:29.900 } 00:12:29.900 Got JSON-RPC error response 00:12:29.900 response: 00:12:29.900 { 00:12:29.900 "code": -32602, 00:12:29.900 "message": "Invalid cntlid range [6-5]" 00:12:29.900 }' 00:12:29.900 17:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:29.900 { 00:12:29.900 "nqn": "nqn.2016-06.io.spdk:cnode16294", 00:12:29.900 "min_cntlid": 6, 00:12:29.900 "max_cntlid": 5, 00:12:29.900 "method": "nvmf_create_subsystem", 00:12:29.900 "req_id": 1 00:12:29.900 } 00:12:29.900 Got JSON-RPC error response 00:12:29.900 response: 00:12:29.900 { 00:12:29.900 "code": -32602, 00:12:29.900 "message": "Invalid cntlid range [6-5]" 00:12:29.900 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:29.900 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:29.900 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:29.900 { 00:12:29.900 "name": "foobar", 00:12:29.900 "method": "nvmf_delete_target", 00:12:29.900 "req_id": 1 00:12:29.901 } 00:12:29.901 Got JSON-RPC error response 00:12:29.901 response: 00:12:29.901 { 00:12:29.901 "code": -32602, 00:12:29.901 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:29.901 }' 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:29.901 { 00:12:29.901 "name": "foobar", 00:12:29.901 "method": "nvmf_delete_target", 00:12:29.901 "req_id": 1 00:12:29.901 } 00:12:29.901 Got JSON-RPC error response 00:12:29.901 response: 00:12:29.901 { 00:12:29.901 "code": -32602, 00:12:29.901 "message": "The specified target doesn't exist, cannot delete it." 
00:12:29.901 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.901 rmmod nvme_tcp 00:12:29.901 rmmod nvme_fabrics 00:12:29.901 rmmod nvme_keyring 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2434866 ']' 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2434866 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2434866 ']' 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2434866 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434866 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434866' 00:12:29.901 killing process with pid 2434866 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2434866 00:12:29.901 17:06:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2434866 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.160 17:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.160 17:06:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.697 00:12:32.697 real 0m12.636s 00:12:32.697 user 0m20.998s 00:12:32.697 sys 0m5.567s 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:32.697 ************************************ 00:12:32.697 END TEST nvmf_invalid 00:12:32.697 ************************************ 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.697 ************************************ 00:12:32.697 START TEST nvmf_connect_stress 00:12:32.697 ************************************ 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:32.697 * Looking for test storage... 
00:12:32.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:32.697 17:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.697 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.698 17:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.698 --rc genhtml_branch_coverage=1 00:12:32.698 --rc genhtml_function_coverage=1 00:12:32.698 --rc genhtml_legend=1 00:12:32.698 --rc geninfo_all_blocks=1 00:12:32.698 --rc geninfo_unexecuted_blocks=1 00:12:32.698 00:12:32.698 ' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.698 --rc genhtml_branch_coverage=1 00:12:32.698 --rc genhtml_function_coverage=1 00:12:32.698 --rc genhtml_legend=1 00:12:32.698 --rc geninfo_all_blocks=1 00:12:32.698 --rc geninfo_unexecuted_blocks=1 00:12:32.698 00:12:32.698 ' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.698 --rc genhtml_branch_coverage=1 00:12:32.698 --rc genhtml_function_coverage=1 00:12:32.698 --rc genhtml_legend=1 00:12:32.698 --rc geninfo_all_blocks=1 00:12:32.698 --rc geninfo_unexecuted_blocks=1 00:12:32.698 00:12:32.698 ' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.698 --rc genhtml_branch_coverage=1 00:12:32.698 --rc genhtml_function_coverage=1 00:12:32.698 --rc genhtml_legend=1 00:12:32.698 --rc geninfo_all_blocks=1 00:12:32.698 --rc geninfo_unexecuted_blocks=1 00:12:32.698 00:12:32.698 ' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.698 17:06:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.271 17:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:39.271 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.271 17:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.271 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:39.271 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.272 17:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:39.272 Found net devices under 0000:86:00.0: cvl_0_0 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:39.272 Found net devices under 0000:86:00.1: cvl_0_1 
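The device-discovery steps traced above (common.sh@411 and @427) map a PCI BDF to its kernel network interface by globbing sysfs and stripping the directory prefix. A minimal sketch of that idiom follows; it runs against a throwaway mock sysfs tree (the `$sysroot` layout and the `cvl_0_0` name, taken from this log, are the only assumptions) so it does not need the real hardware:

```shell
# Mimic the PCI -> net-device lookup from nvmf/common.sh:
# glob <sysroot>/devices/<bdf>/net/* and keep only the basenames.
# Uses a mock sysfs tree under mktemp so it runs anywhere; the BDF and
# interface name are copied from this log, not probed from hardware.
sysroot=$(mktemp -d)
pci="0000:86:00.0"                             # BDF as printed in the trace
mkdir -p "$sysroot/devices/$pci/net/cvl_0_0"   # mock the sysfs entry

pci_net_devs=("$sysroot/devices/$pci/net/"*)   # same glob as common.sh@411
pci_net_devs=("${pci_net_devs[@]##*/}")        # strip dirs, as common.sh@427
found=${pci_net_devs[0]}
echo "Found net devices under $pci: $found"

rm -rf "$sysroot"                              # clean up the mock tree
```

The `${array[@]##*/}` expansion is the same trick the harness uses to turn full sysfs paths into bare interface names before appending them to `net_devs`.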
00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:39.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:12:39.272 00:12:39.272 --- 10.0.0.2 ping statistics --- 00:12:39.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.272 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:12:39.272 00:12:39.272 --- 10.0.0.1 ping statistics --- 00:12:39.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.272 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:39.272 17:06:56 
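Pulled out of the trace above, the `nvmf_tcp_init` sequence is a small netns-based topology: one port of the NIC pair is moved into a target namespace, both ends get addresses on 10.0.0.0/24, the NVMe/TCP port is opened in iptables, and connectivity is verified with cross-direction pings. The following is a configuration sketch assembled from the exact commands in this log (interface names `cvl_0_0`/`cvl_0_1` and addresses are specific to this run); it requires root and the real devices, so treat it as illustration rather than a runnable script:

```shell
# Target side lives in a namespace; initiator side stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the standard NVMe/TCP port (4420) toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```

Running the target inside a namespace is what lets a single physical host exercise the full initiator/target path over real NICs, which is why the log then launches `nvmf_tgt` via `ip netns exec cvl_0_0_ns_spdk`.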
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2439259 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2439259 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2439259 ']' 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.272 17:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.272 [2024-11-20 17:06:56.406846] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:12:39.272 [2024-11-20 17:06:56.406898] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.272 [2024-11-20 17:06:56.487420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:39.272 [2024-11-20 17:06:56.528699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.272 [2024-11-20 17:06:56.528736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.272 [2024-11-20 17:06:56.528743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.272 [2024-11-20 17:06:56.528749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.272 [2024-11-20 17:06:56.528754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:39.272 [2024-11-20 17:06:56.530140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.272 [2024-11-20 17:06:56.530249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.272 [2024-11-20 17:06:56.530249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.272 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.272 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:39.272 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:39.272 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.272 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.272 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.273 [2024-11-20 17:06:57.277555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.273 [2024-11-20 17:06:57.293722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.273 NULL1 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2439507 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:39.273 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.532 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.791 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.791 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:39.791 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.791 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
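The repeated `kill -0 2439507` calls above are the harness polling whether the `connect_stress` process (`PERF_PID`) is still alive between RPC bursts: `kill -0` sends no signal, it only reports whether the PID exists. A minimal standalone sketch of that liveness check (the `sleep` stand-in process is the only assumption):

```shell
# Liveness polling with `kill -0`, as connect_stress.sh does with PERF_PID.
sleep 5 &                       # stand-in for the long-running stress process
pid=$!

if kill -0 "$pid" 2>/dev/null; then   # signal 0: existence check, no delivery
  alive=yes
else
  alive=no
fi

kill "$pid" 2>/dev/null               # terminate the stand-in process
wait "$pid" 2>/dev/null || true       # reap it so the PID is gone

if kill -0 "$pid" 2>/dev/null; then
  alive_after=yes
else
  alive_after=no
fi
```

Because `kill -0` returns non-zero once the process is reaped, the test loop can keep issuing RPC work only while the stress tool is running and bail out cleanly when it exits.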
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.791 17:06:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.050 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.050 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:40.050 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.050 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.050 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.617 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.617 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:40.617 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.617 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.617 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.876 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.876 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:40.876 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.876 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.876 17:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.135 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.135 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:41.135 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.135 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.135 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.394 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.394 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:41.394 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.394 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.394 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.653 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.653 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:41.653 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.653 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.653 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.220 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.220 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:42.220 17:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.220 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.220 17:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.478 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.478 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:42.478 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.478 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.478 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.736 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.736 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:42.736 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.736 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.736 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.995 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.995 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:42.995 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.995 17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.995 
17:07:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.254 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.254 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:43.254 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.254 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.254 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.822 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.822 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:43.822 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.822 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.822 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.080 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.080 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:44.080 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.080 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.080 17:07:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.338 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.338 
17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:44.338 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.338 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.338 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.597 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.597 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:44.597 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.597 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.597 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.855 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.855 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:44.855 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.855 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.855 17:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.437 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.437 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:45.437 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:45.437 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.437 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.694 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.694 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:45.694 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.694 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.694 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.951 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.951 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:45.951 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.951 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.951 17:07:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.208 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.208 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:46.208 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.208 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.208 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:46.772 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.772 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:46.772 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.772 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.772 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.029 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.029 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:47.029 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.029 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.029 17:07:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.286 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.286 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:47.286 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.286 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.286 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.544 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.544 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2439507 00:12:47.544 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.544 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.544 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.802 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.802 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:47.802 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.802 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.802 17:07:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.369 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.369 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:48.369 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.369 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.369 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:48.627 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.627 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507 00:12:48.627 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.627 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:12:48.627 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:48.885 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:48.885 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507
00:12:48.885 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:48.885 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:48.885 17:07:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:49.144 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.144 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507
00:12:49.144 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:49.144 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.144 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:49.710 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.710 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507
00:12:49.710 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:49.710 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.710 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:49.710 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
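The trace above repeats one pattern until the stress workload exits: `kill -0 $PID` to test whether the background process still exists (it sends no signal), and an `rpc_cmd` issued on each iteration while it does. A minimal sketch of that liveness-polling loop, not the SPDK script itself (`WORKER_PID` and the `sleep` workload are stand-ins):

```shell
#!/usr/bin/env bash
# Sketch of the connect_stress-style watchdog loop: run a background workload,
# keep doing per-iteration work (an RPC in the real test) while it is alive.
set -euo pipefail

sleep 2 &               # stand-in for the background stress workload
WORKER_PID=$!

polls=0
while kill -0 "$WORKER_PID" 2>/dev/null; do
    # In the real test this is an SPDK `rpc_cmd` invocation; here we just count.
    polls=$((polls + 1))
    sleep 0.5
done

wait "$WORKER_PID" || true   # reap the worker; ignore its exit status
echo "polled $polls times"
```

`kill -0` returns non-zero once the PID is gone, which is also why the log later shows `kill: (2439507) - No such process` the first time the check runs after the workload has exited.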
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2439507
00:12:49.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2439507) - No such process
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2439507
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:49.969 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2439259 ']'
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2439259
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2439259 ']'
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2439259
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2439259
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:12:49.969 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2439259'
killing process with pid 2439259
17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2439259
00:12:49.970 17:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2439259
00:12:50.229 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:50.229 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:50.229 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:50.229 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:12:50.229 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:12:50.229 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:50.229 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:12:50.230 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:50.230 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:50.230 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:50.230 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:50.230 17:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:52.136 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:52.136
00:12:52.136 real 0m19.905s
00:12:52.136 user 0m42.166s
00:12:52.136 sys 0m8.687s
00:12:52.136 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:52.136 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:52.136 ************************************
00:12:52.136 END TEST nvmf_connect_stress
00:12:52.136 ************************************
00:12:52.137 17:07:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:52.137 17:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:52.137 17:07:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.137 17:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.397 ************************************ 00:12:52.397 START TEST nvmf_fused_ordering 00:12:52.397 ************************************ 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:52.397 * Looking for test storage... 00:12:52.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.397 17:07:10 
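The trace around scripts/common.sh above is the lcov version probe: each version string is split on `.`/`-` and compared field by field, so `1.15` sorts before `2`. A standalone sketch of that field-wise comparison follows; `version_lt` is a hypothetical name chosen for illustration, not the SPDK helper itself:

```shell
# Compare two dotted version strings numerically, field by field.
# Returns 0 (true) when $1 is strictly less than $2; missing fields count as 0.
version_lt() {
    local IFS=.- v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Splitting on `-` as well as `.` mirrors the `IFS=.-` seen in the trace, so suffixed versions like `2.0-1` still compare by their leading numeric fields.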
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:52.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.397 --rc genhtml_branch_coverage=1 00:12:52.397 --rc genhtml_function_coverage=1 00:12:52.397 --rc genhtml_legend=1 00:12:52.397 --rc geninfo_all_blocks=1 00:12:52.397 --rc geninfo_unexecuted_blocks=1 00:12:52.397 00:12:52.397 ' 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:52.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.397 --rc genhtml_branch_coverage=1 00:12:52.397 --rc genhtml_function_coverage=1 00:12:52.397 --rc genhtml_legend=1 00:12:52.397 --rc geninfo_all_blocks=1 00:12:52.397 --rc geninfo_unexecuted_blocks=1 00:12:52.397 00:12:52.397 ' 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:52.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.397 --rc genhtml_branch_coverage=1 00:12:52.397 --rc genhtml_function_coverage=1 00:12:52.397 --rc genhtml_legend=1 00:12:52.397 --rc geninfo_all_blocks=1 00:12:52.397 --rc geninfo_unexecuted_blocks=1 00:12:52.397 00:12:52.397 ' 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:52.397 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:52.397 --rc genhtml_branch_coverage=1 00:12:52.397 --rc genhtml_function_coverage=1 00:12:52.397 --rc genhtml_legend=1 00:12:52.397 --rc geninfo_all_blocks=1 00:12:52.397 --rc geninfo_unexecuted_blocks=1 00:12:52.397 00:12:52.397 ' 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.397 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.397 17:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.398 17:07:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.972 17:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:58.972 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:58.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.973 17:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:58.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.973 17:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:58.973 Found net devices under 0000:86:00.0: cvl_0_0 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:58.973 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:58.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:12:58.973 00:12:58.973 --- 10.0.0.2 ping statistics --- 00:12:58.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.973 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:12:58.973 00:12:58.973 --- 10.0.0.1 ping statistics --- 00:12:58.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.973 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:58.973 17:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2444714 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2444714 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2444714 ']' 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.973 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.973 [2024-11-20 17:07:16.457564] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:12:58.973 [2024-11-20 17:07:16.457609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.973 [2024-11-20 17:07:16.536686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.973 [2024-11-20 17:07:16.575266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.973 [2024-11-20 17:07:16.575302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.973 [2024-11-20 17:07:16.575310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.973 [2024-11-20 17:07:16.575317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.974 [2024-11-20 17:07:16.575322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:58.974 [2024-11-20 17:07:16.575841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.974 [2024-11-20 17:07:16.719422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.974 [2024-11-20 17:07:16.739642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.974 NULL1 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.974 17:07:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:58.974 [2024-11-20 17:07:16.797252] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:12:58.974 [2024-11-20 17:07:16.797284] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444900 ] 00:12:59.233 Attached to nqn.2016-06.io.spdk:cnode1 00:12:59.233 Namespace ID: 1 size: 1GB 00:12:59.233 fused_ordering(0) 00:12:59.233 fused_ordering(1) 00:12:59.233 fused_ordering(2) 00:12:59.233 fused_ordering(3) 00:12:59.233 fused_ordering(4) 00:12:59.233 fused_ordering(5) 00:12:59.233 fused_ordering(6) 00:12:59.233 fused_ordering(7) 00:12:59.233 fused_ordering(8) 00:12:59.233 fused_ordering(9) 00:12:59.233 fused_ordering(10) 00:12:59.233 fused_ordering(11) 00:12:59.233 fused_ordering(12) 00:12:59.233 fused_ordering(13) 00:12:59.233 fused_ordering(14) 00:12:59.233 fused_ordering(15) 00:12:59.233 fused_ordering(16) 00:12:59.233 fused_ordering(17) 00:12:59.233 fused_ordering(18) 00:12:59.233 fused_ordering(19) 00:12:59.233 fused_ordering(20) 00:12:59.233 fused_ordering(21) 00:12:59.233 fused_ordering(22) 00:12:59.233 fused_ordering(23) 00:12:59.233 fused_ordering(24) 00:12:59.233 fused_ordering(25) 00:12:59.233 fused_ordering(26) 00:12:59.233 fused_ordering(27) 00:12:59.233 
fused_ordering(28) 00:12:59.233 fused_ordering(29) 00:12:59.234 fused_ordering(30) 00:12:59.234 fused_ordering(31) 00:12:59.234 fused_ordering(32) 00:12:59.234 fused_ordering(33) 00:12:59.234 fused_ordering(34) 00:12:59.234 fused_ordering(35) 00:12:59.234 fused_ordering(36) 00:12:59.234 fused_ordering(37) 00:12:59.234 fused_ordering(38) 00:12:59.234 fused_ordering(39) 00:12:59.234 fused_ordering(40) 00:12:59.234 fused_ordering(41) 00:12:59.234 fused_ordering(42) 00:12:59.234 fused_ordering(43) 00:12:59.234 fused_ordering(44) 00:12:59.234 fused_ordering(45) 00:12:59.234 fused_ordering(46) 00:12:59.234 fused_ordering(47) 00:12:59.234 fused_ordering(48) 00:12:59.234 fused_ordering(49) 00:12:59.234 fused_ordering(50) 00:12:59.234 fused_ordering(51) 00:12:59.234 fused_ordering(52) 00:12:59.234 fused_ordering(53) 00:12:59.234 fused_ordering(54) 00:12:59.234 fused_ordering(55) 00:12:59.234 fused_ordering(56) 00:12:59.234 fused_ordering(57) 00:12:59.234 fused_ordering(58) 00:12:59.234 fused_ordering(59) 00:12:59.234 fused_ordering(60) 00:12:59.234 fused_ordering(61) 00:12:59.234 fused_ordering(62) 00:12:59.234 fused_ordering(63) 00:12:59.234 fused_ordering(64) 00:12:59.234 fused_ordering(65) 00:12:59.234 fused_ordering(66) 00:12:59.234 fused_ordering(67) 00:12:59.234 fused_ordering(68) 00:12:59.234 fused_ordering(69) 00:12:59.234 fused_ordering(70) 00:12:59.234 fused_ordering(71) 00:12:59.234 fused_ordering(72) 00:12:59.234 fused_ordering(73) 00:12:59.234 fused_ordering(74) 00:12:59.234 fused_ordering(75) 00:12:59.234 fused_ordering(76) 00:12:59.234 fused_ordering(77) 00:12:59.234 fused_ordering(78) 00:12:59.234 fused_ordering(79) 00:12:59.234 fused_ordering(80) 00:12:59.234 fused_ordering(81) 00:12:59.234 fused_ordering(82) 00:12:59.234 fused_ordering(83) 00:12:59.234 fused_ordering(84) 00:12:59.234 fused_ordering(85) 00:12:59.234 fused_ordering(86) 00:12:59.234 fused_ordering(87) 00:12:59.234 fused_ordering(88) 00:12:59.234 fused_ordering(89) 00:12:59.234 
fused_ordering(90) 00:12:59.234 fused_ordering(91) 00:12:59.234 fused_ordering(92) 00:12:59.234 fused_ordering(93) 00:12:59.234 fused_ordering(94) 00:12:59.234 fused_ordering(95) 00:12:59.234 fused_ordering(96) 00:12:59.234 fused_ordering(97) 00:12:59.234 fused_ordering(98) 00:12:59.234 fused_ordering(99) 00:12:59.234 fused_ordering(100) 00:12:59.234 fused_ordering(101) 00:12:59.234 fused_ordering(102) 00:12:59.234 fused_ordering(103) 00:12:59.234 fused_ordering(104) 00:12:59.234 fused_ordering(105) 00:12:59.234 fused_ordering(106) 00:12:59.234 fused_ordering(107) 00:12:59.234 fused_ordering(108) 00:12:59.234 fused_ordering(109) 00:12:59.234 fused_ordering(110) 00:12:59.234 fused_ordering(111) 00:12:59.234 fused_ordering(112) 00:12:59.234 fused_ordering(113) 00:12:59.234 fused_ordering(114) 00:12:59.234 fused_ordering(115) 00:12:59.234 fused_ordering(116) 00:12:59.234 fused_ordering(117) 00:12:59.234 fused_ordering(118) 00:12:59.234 fused_ordering(119) 00:12:59.234 fused_ordering(120) 00:12:59.234 fused_ordering(121) 00:12:59.234 fused_ordering(122) 00:12:59.234 fused_ordering(123) 00:12:59.234 fused_ordering(124) 00:12:59.234 fused_ordering(125) 00:12:59.234 fused_ordering(126) 00:12:59.234 fused_ordering(127) 00:12:59.234 fused_ordering(128) 00:12:59.234 fused_ordering(129) 00:12:59.234 fused_ordering(130) 00:12:59.234 fused_ordering(131) 00:12:59.234 fused_ordering(132) 00:12:59.234 fused_ordering(133) 00:12:59.234 fused_ordering(134) 00:12:59.234 fused_ordering(135) 00:12:59.234 fused_ordering(136) 00:12:59.234 fused_ordering(137) 00:12:59.234 fused_ordering(138) 00:12:59.234 fused_ordering(139) 00:12:59.234 fused_ordering(140) 00:12:59.234 fused_ordering(141) 00:12:59.234 fused_ordering(142) 00:12:59.234 fused_ordering(143) 00:12:59.234 fused_ordering(144) 00:12:59.234 fused_ordering(145) 00:12:59.234 fused_ordering(146) 00:12:59.234 fused_ordering(147) 00:12:59.234 fused_ordering(148) 00:12:59.234 fused_ordering(149) 00:12:59.234 fused_ordering(150) 
00:12:59.234 fused_ordering(151) 00:12:59.234 fused_ordering(152) 00:12:59.234 fused_ordering(153) 00:12:59.234 fused_ordering(154) 00:12:59.234 fused_ordering(155) 00:12:59.234 fused_ordering(156) 00:12:59.234 fused_ordering(157) 00:12:59.234 fused_ordering(158) 00:12:59.234 fused_ordering(159) 00:12:59.234 fused_ordering(160) 00:12:59.234 fused_ordering(161) 00:12:59.234 fused_ordering(162) 00:12:59.234 fused_ordering(163) 00:12:59.234 fused_ordering(164) 00:12:59.234 fused_ordering(165) 00:12:59.234 fused_ordering(166) 00:12:59.234 fused_ordering(167) 00:12:59.234 fused_ordering(168) 00:12:59.234 fused_ordering(169) 00:12:59.234 fused_ordering(170) 00:12:59.234 fused_ordering(171) 00:12:59.234 fused_ordering(172) 00:12:59.234 fused_ordering(173) 00:12:59.234 fused_ordering(174) 00:12:59.234 fused_ordering(175) 00:12:59.234 fused_ordering(176) 00:12:59.234 fused_ordering(177) 00:12:59.234 fused_ordering(178) 00:12:59.234 fused_ordering(179) 00:12:59.234 fused_ordering(180) 00:12:59.234 fused_ordering(181) 00:12:59.234 fused_ordering(182) 00:12:59.234 fused_ordering(183) 00:12:59.234 fused_ordering(184) 00:12:59.234 fused_ordering(185) 00:12:59.234 fused_ordering(186) 00:12:59.234 fused_ordering(187) 00:12:59.234 fused_ordering(188) 00:12:59.234 fused_ordering(189) 00:12:59.234 fused_ordering(190) 00:12:59.234 fused_ordering(191) 00:12:59.234 fused_ordering(192) 00:12:59.234 fused_ordering(193) 00:12:59.234 fused_ordering(194) 00:12:59.234 fused_ordering(195) 00:12:59.234 fused_ordering(196) 00:12:59.234 fused_ordering(197) 00:12:59.234 fused_ordering(198) 00:12:59.234 fused_ordering(199) 00:12:59.234 fused_ordering(200) 00:12:59.234 fused_ordering(201) 00:12:59.234 fused_ordering(202) 00:12:59.234 fused_ordering(203) 00:12:59.234 fused_ordering(204) 00:12:59.234 fused_ordering(205) 00:12:59.493 fused_ordering(206) 00:12:59.493 fused_ordering(207) 00:12:59.493 fused_ordering(208) 00:12:59.493 fused_ordering(209) 00:12:59.493 fused_ordering(210) 00:12:59.493 
fused_ordering(211) 00:12:59.493 fused_ordering(212) 00:12:59.494 fused_ordering(213) 00:12:59.494 fused_ordering(214) 00:12:59.494 fused_ordering(215) 00:12:59.494 fused_ordering(216) 00:12:59.494 fused_ordering(217) 00:12:59.494 fused_ordering(218) 00:12:59.494 fused_ordering(219) 00:12:59.494 fused_ordering(220) 00:12:59.494 fused_ordering(221) 00:12:59.494 fused_ordering(222) 00:12:59.494 fused_ordering(223) 00:12:59.494 fused_ordering(224) 00:12:59.494 fused_ordering(225) 00:12:59.494 fused_ordering(226) 00:12:59.494 fused_ordering(227) 00:12:59.494 fused_ordering(228) 00:12:59.494 fused_ordering(229) 00:12:59.494 fused_ordering(230) 00:12:59.494 fused_ordering(231) 00:12:59.494 fused_ordering(232) 00:12:59.494 fused_ordering(233) 00:12:59.494 fused_ordering(234) 00:12:59.494 fused_ordering(235) 00:12:59.494 fused_ordering(236) 00:12:59.494 fused_ordering(237) 00:12:59.494 fused_ordering(238) 00:12:59.494 fused_ordering(239) 00:12:59.494 fused_ordering(240) 00:12:59.494 fused_ordering(241) 00:12:59.494 fused_ordering(242) 00:12:59.494 fused_ordering(243) 00:12:59.494 fused_ordering(244) 00:12:59.494 fused_ordering(245) 00:12:59.494 fused_ordering(246) 00:12:59.494 fused_ordering(247) 00:12:59.494 fused_ordering(248) 00:12:59.494 fused_ordering(249) 00:12:59.494 fused_ordering(250) 00:12:59.494 fused_ordering(251) 00:12:59.494 fused_ordering(252) 00:12:59.494 fused_ordering(253) 00:12:59.494 fused_ordering(254) 00:12:59.494 fused_ordering(255) 00:12:59.494 fused_ordering(256) 00:12:59.494 fused_ordering(257) 00:12:59.494 fused_ordering(258) 00:12:59.494 fused_ordering(259) 00:12:59.494 fused_ordering(260) 00:12:59.494 fused_ordering(261) 00:12:59.494 fused_ordering(262) 00:12:59.494 fused_ordering(263) 00:12:59.494 fused_ordering(264) 00:12:59.494 fused_ordering(265) 00:12:59.494 fused_ordering(266) 00:12:59.494 fused_ordering(267) 00:12:59.494 fused_ordering(268) 00:12:59.494 fused_ordering(269) 00:12:59.494 fused_ordering(270) 00:12:59.494 fused_ordering(271) 
00:12:59.494 fused_ordering(272) 00:12:59.494 fused_ordering(273) 00:12:59.494 fused_ordering(274) 00:12:59.494 fused_ordering(275) 00:12:59.494 fused_ordering(276) 00:12:59.494 fused_ordering(277) 00:12:59.494 fused_ordering(278) 00:12:59.494 fused_ordering(279) 00:12:59.494 fused_ordering(280) 00:12:59.494 fused_ordering(281) 00:12:59.494 fused_ordering(282) 00:12:59.494 fused_ordering(283) 00:12:59.494 fused_ordering(284) 00:12:59.494 fused_ordering(285) 00:12:59.494 fused_ordering(286) 00:12:59.494 fused_ordering(287) 00:12:59.494 fused_ordering(288) 00:12:59.494 fused_ordering(289) 00:12:59.494 fused_ordering(290) 00:12:59.494 fused_ordering(291) 00:12:59.494 fused_ordering(292) 00:12:59.494 fused_ordering(293) 00:12:59.494 fused_ordering(294) 00:12:59.494 fused_ordering(295) 00:12:59.494 fused_ordering(296) 00:12:59.494 fused_ordering(297) 00:12:59.494 fused_ordering(298) 00:12:59.494 fused_ordering(299) 00:12:59.494 fused_ordering(300) 00:12:59.494 fused_ordering(301) 00:12:59.494 fused_ordering(302) 00:12:59.494 fused_ordering(303) 00:12:59.494 fused_ordering(304) 00:12:59.494 fused_ordering(305) 00:12:59.494 fused_ordering(306) 00:12:59.494 fused_ordering(307) 00:12:59.494 fused_ordering(308) 00:12:59.494 fused_ordering(309) 00:12:59.494 fused_ordering(310) 00:12:59.494 fused_ordering(311) 00:12:59.494 fused_ordering(312) 00:12:59.494 fused_ordering(313) 00:12:59.494 fused_ordering(314) 00:12:59.494 fused_ordering(315) 00:12:59.494 fused_ordering(316) 00:12:59.494 fused_ordering(317) 00:12:59.494 fused_ordering(318) 00:12:59.494 fused_ordering(319) 00:12:59.494 fused_ordering(320) 00:12:59.494 fused_ordering(321) 00:12:59.494 fused_ordering(322) 00:12:59.494 fused_ordering(323) 00:12:59.494 fused_ordering(324) 00:12:59.494 fused_ordering(325) 00:12:59.494 fused_ordering(326) 00:12:59.494 fused_ordering(327) 00:12:59.494 fused_ordering(328) 00:12:59.494 fused_ordering(329) 00:12:59.494 fused_ordering(330) 00:12:59.494 fused_ordering(331) 00:12:59.494 
fused_ordering(332) 00:12:59.494 fused_ordering(333) 00:12:59.494 fused_ordering(334) 00:12:59.494 fused_ordering(335) 00:12:59.494 fused_ordering(336) 00:12:59.494 fused_ordering(337) 00:12:59.494 fused_ordering(338) 00:12:59.494 fused_ordering(339) 00:12:59.494 fused_ordering(340) 00:12:59.494 fused_ordering(341) 00:12:59.494 fused_ordering(342) 00:12:59.494 fused_ordering(343) 00:12:59.494 fused_ordering(344) 00:12:59.494 fused_ordering(345) 00:12:59.494 fused_ordering(346) 00:12:59.494 fused_ordering(347) 00:12:59.494 fused_ordering(348) 00:12:59.494 fused_ordering(349) 00:12:59.494 fused_ordering(350) 00:12:59.494 fused_ordering(351) 00:12:59.494 fused_ordering(352) 00:12:59.494 fused_ordering(353) 00:12:59.494 fused_ordering(354) 00:12:59.494 fused_ordering(355) 00:12:59.494 fused_ordering(356) 00:12:59.494 fused_ordering(357) 00:12:59.494 fused_ordering(358) 00:12:59.494 fused_ordering(359) 00:12:59.494 fused_ordering(360) 00:12:59.494 fused_ordering(361) 00:12:59.494 fused_ordering(362) 00:12:59.494 fused_ordering(363) 00:12:59.494 fused_ordering(364) 00:12:59.494 fused_ordering(365) 00:12:59.494 fused_ordering(366) 00:12:59.494 fused_ordering(367) 00:12:59.494 fused_ordering(368) 00:12:59.494 fused_ordering(369) 00:12:59.494 fused_ordering(370) 00:12:59.494 fused_ordering(371) 00:12:59.494 fused_ordering(372) 00:12:59.494 fused_ordering(373) 00:12:59.494 fused_ordering(374) 00:12:59.494 fused_ordering(375) 00:12:59.494 fused_ordering(376) 00:12:59.494 fused_ordering(377) 00:12:59.494 fused_ordering(378) 00:12:59.494 fused_ordering(379) 00:12:59.494 fused_ordering(380) 00:12:59.494 fused_ordering(381) 00:12:59.494 fused_ordering(382) 00:12:59.494 fused_ordering(383) 00:12:59.494 fused_ordering(384) 00:12:59.494 fused_ordering(385) 00:12:59.494 fused_ordering(386) 00:12:59.494 fused_ordering(387) 00:12:59.494 fused_ordering(388) 00:12:59.494 fused_ordering(389) 00:12:59.494 fused_ordering(390) 00:12:59.494 fused_ordering(391) 00:12:59.494 fused_ordering(392) 
00:12:59.494 fused_ordering(393) 00:12:59.494 fused_ordering(394) 00:12:59.494 fused_ordering(395) 00:12:59.494 fused_ordering(396) 00:12:59.494 fused_ordering(397) 00:12:59.494 fused_ordering(398) 00:12:59.494 fused_ordering(399) 00:12:59.494 fused_ordering(400) 00:12:59.494 fused_ordering(401) 00:12:59.494 fused_ordering(402) 00:12:59.494 fused_ordering(403) 00:12:59.494 fused_ordering(404) 00:12:59.494 fused_ordering(405) 00:12:59.494 fused_ordering(406) 00:12:59.494 fused_ordering(407) 00:12:59.494 fused_ordering(408) 00:12:59.494 fused_ordering(409) 00:12:59.494 fused_ordering(410) 00:12:59.754 fused_ordering(411) 00:12:59.754 fused_ordering(412) 00:12:59.754 fused_ordering(413) 00:12:59.754 fused_ordering(414) 00:12:59.754 fused_ordering(415) 00:12:59.754 fused_ordering(416) 00:12:59.754 fused_ordering(417) 00:12:59.754 fused_ordering(418) 00:12:59.754 fused_ordering(419) 00:12:59.754 fused_ordering(420) 00:12:59.754 fused_ordering(421) 00:12:59.754 fused_ordering(422) 00:12:59.754 fused_ordering(423) 00:12:59.754 fused_ordering(424) 00:12:59.754 fused_ordering(425) 00:12:59.754 fused_ordering(426) 00:12:59.754 fused_ordering(427) 00:12:59.754 fused_ordering(428) 00:12:59.754 fused_ordering(429) 00:12:59.754 fused_ordering(430) 00:12:59.754 fused_ordering(431) 00:12:59.754 fused_ordering(432) 00:12:59.754 fused_ordering(433) 00:12:59.754 fused_ordering(434) 00:12:59.754 fused_ordering(435) 00:12:59.754 fused_ordering(436) 00:12:59.754 fused_ordering(437) 00:12:59.754 fused_ordering(438) 00:12:59.754 fused_ordering(439) 00:12:59.754 fused_ordering(440) 00:12:59.754 fused_ordering(441) 00:12:59.754 fused_ordering(442) 00:12:59.754 fused_ordering(443) 00:12:59.754 fused_ordering(444) 00:12:59.754 fused_ordering(445) 00:12:59.754 fused_ordering(446) 00:12:59.754 fused_ordering(447) 00:12:59.754 fused_ordering(448) 00:12:59.754 fused_ordering(449) 00:12:59.754 fused_ordering(450) 00:12:59.754 fused_ordering(451) 00:12:59.754 fused_ordering(452) 00:12:59.754 
fused_ordering(453) 00:12:59.754 fused_ordering(454) 00:12:59.754 fused_ordering(455) 00:12:59.754 fused_ordering(456) 00:12:59.754 fused_ordering(457) 00:12:59.754 fused_ordering(458) 00:12:59.754 fused_ordering(459) 00:12:59.754 fused_ordering(460) 00:12:59.754 fused_ordering(461) 00:12:59.754 fused_ordering(462) 00:12:59.754 fused_ordering(463) 00:12:59.754 fused_ordering(464) 00:12:59.754 fused_ordering(465) 00:12:59.754 fused_ordering(466) 00:12:59.754 fused_ordering(467) 00:12:59.754 fused_ordering(468) 00:12:59.754 fused_ordering(469) 00:12:59.754 fused_ordering(470) 00:12:59.754 fused_ordering(471) 00:12:59.754 fused_ordering(472) 00:12:59.754 fused_ordering(473) 00:12:59.754 fused_ordering(474) 00:12:59.754 fused_ordering(475) 00:12:59.754 fused_ordering(476) 00:12:59.754 fused_ordering(477) 00:12:59.754 fused_ordering(478) 00:12:59.754 fused_ordering(479) 00:12:59.754 fused_ordering(480) 00:12:59.754 fused_ordering(481) 00:12:59.754 fused_ordering(482) 00:12:59.754 fused_ordering(483) 00:12:59.754 fused_ordering(484) 00:12:59.754 fused_ordering(485) 00:12:59.754 fused_ordering(486) 00:12:59.754 fused_ordering(487) 00:12:59.754 fused_ordering(488) 00:12:59.754 fused_ordering(489) 00:12:59.755 fused_ordering(490) 00:12:59.755 fused_ordering(491) 00:12:59.755 fused_ordering(492) 00:12:59.755 fused_ordering(493) 00:12:59.755 fused_ordering(494) 00:12:59.755 fused_ordering(495) 00:12:59.755 fused_ordering(496) 00:12:59.755 fused_ordering(497) 00:12:59.755 fused_ordering(498) 00:12:59.755 fused_ordering(499) 00:12:59.755 fused_ordering(500) 00:12:59.755 fused_ordering(501) 00:12:59.755 fused_ordering(502) 00:12:59.755 fused_ordering(503) 00:12:59.755 fused_ordering(504) 00:12:59.755 fused_ordering(505) 00:12:59.755 fused_ordering(506) 00:12:59.755 fused_ordering(507) 00:12:59.755 fused_ordering(508) 00:12:59.755 fused_ordering(509) 00:12:59.755 fused_ordering(510) 00:12:59.755 fused_ordering(511) 00:12:59.755 fused_ordering(512) 00:12:59.755 fused_ordering(513) 
00:12:59.755 fused_ordering(514) 00:12:59.755 fused_ordering(515) 00:12:59.755 fused_ordering(516) 00:12:59.755 fused_ordering(517) 00:12:59.755 fused_ordering(518) 00:12:59.755 fused_ordering(519) 00:12:59.755 fused_ordering(520) 00:12:59.755 fused_ordering(521) 00:12:59.755 fused_ordering(522) 00:12:59.755 fused_ordering(523) 00:12:59.755 fused_ordering(524) 00:12:59.755 fused_ordering(525) 00:12:59.755 fused_ordering(526) 00:12:59.755 fused_ordering(527) 00:12:59.755 fused_ordering(528) 00:12:59.755 fused_ordering(529) 00:12:59.755 fused_ordering(530) 00:12:59.755 fused_ordering(531) 00:12:59.755 fused_ordering(532) 00:12:59.755 fused_ordering(533) 00:12:59.755 fused_ordering(534) 00:12:59.755 fused_ordering(535) 00:12:59.755 fused_ordering(536) 00:12:59.755 fused_ordering(537) 00:12:59.755 fused_ordering(538) 00:12:59.755 fused_ordering(539) 00:12:59.755 fused_ordering(540) 00:12:59.755 fused_ordering(541) 00:12:59.755 fused_ordering(542) 00:12:59.755 fused_ordering(543) 00:12:59.755 fused_ordering(544) 00:12:59.755 fused_ordering(545) 00:12:59.755 fused_ordering(546) 00:12:59.755 fused_ordering(547) 00:12:59.755 fused_ordering(548) 00:12:59.755 fused_ordering(549) 00:12:59.755 fused_ordering(550) 00:12:59.755 fused_ordering(551) 00:12:59.755 fused_ordering(552) 00:12:59.755 fused_ordering(553) 00:12:59.755 fused_ordering(554) 00:12:59.755 fused_ordering(555) 00:12:59.755 fused_ordering(556) 00:12:59.755 fused_ordering(557) 00:12:59.755 fused_ordering(558) 00:12:59.755 fused_ordering(559) 00:12:59.755 fused_ordering(560) 00:12:59.755 fused_ordering(561) 00:12:59.755 fused_ordering(562) 00:12:59.755 fused_ordering(563) 00:12:59.755 fused_ordering(564) 00:12:59.755 fused_ordering(565) 00:12:59.755 fused_ordering(566) 00:12:59.755 fused_ordering(567) 00:12:59.755 fused_ordering(568) 00:12:59.755 fused_ordering(569) 00:12:59.755 fused_ordering(570) 00:12:59.755 fused_ordering(571) 00:12:59.755 fused_ordering(572) 00:12:59.755 fused_ordering(573) 00:12:59.755 
fused_ordering(574) 00:12:59.755 fused_ordering(575) 00:12:59.755 fused_ordering(576) 00:12:59.755 fused_ordering(577) 00:12:59.755 fused_ordering(578) 00:12:59.755 fused_ordering(579) 00:12:59.755 fused_ordering(580) 00:12:59.755 fused_ordering(581) 00:12:59.755 fused_ordering(582) 00:12:59.755 fused_ordering(583) 00:12:59.755 fused_ordering(584) 00:12:59.755 fused_ordering(585) 00:12:59.755 fused_ordering(586) 00:12:59.755 fused_ordering(587) 00:12:59.755 fused_ordering(588) 00:12:59.755 fused_ordering(589) 00:12:59.755 fused_ordering(590) 00:12:59.755 fused_ordering(591) 00:12:59.755 fused_ordering(592) 00:12:59.755 fused_ordering(593) 00:12:59.755 fused_ordering(594) 00:12:59.755 fused_ordering(595) 00:12:59.755 fused_ordering(596) 00:12:59.755 fused_ordering(597) 00:12:59.755 fused_ordering(598) 00:12:59.755 fused_ordering(599) 00:12:59.755 fused_ordering(600) 00:12:59.755 fused_ordering(601) 00:12:59.755 fused_ordering(602) 00:12:59.755 fused_ordering(603) 00:12:59.755 fused_ordering(604) 00:12:59.755 fused_ordering(605) 00:12:59.755 fused_ordering(606) 00:12:59.755 fused_ordering(607) 00:12:59.755 fused_ordering(608) 00:12:59.755 fused_ordering(609) 00:12:59.755 fused_ordering(610) 00:12:59.755 fused_ordering(611) 00:12:59.755 fused_ordering(612) 00:12:59.755 fused_ordering(613) 00:12:59.755 fused_ordering(614) 00:12:59.755 fused_ordering(615) 00:13:00.324 fused_ordering(616) 00:13:00.324 fused_ordering(617) 00:13:00.324 fused_ordering(618) 00:13:00.324 fused_ordering(619) 00:13:00.324 fused_ordering(620) 00:13:00.324 fused_ordering(621) 00:13:00.324 fused_ordering(622) 00:13:00.324 fused_ordering(623) 00:13:00.324 fused_ordering(624) 00:13:00.324 fused_ordering(625) 00:13:00.324 fused_ordering(626) 00:13:00.324 fused_ordering(627) 00:13:00.324 fused_ordering(628) 00:13:00.324 fused_ordering(629) 00:13:00.324 fused_ordering(630) 00:13:00.324 fused_ordering(631) 00:13:00.324 fused_ordering(632) 00:13:00.324 fused_ordering(633) 00:13:00.324 fused_ordering(634) 
00:13:00.324 fused_ordering(635) 00:13:00.324 fused_ordering(636) 00:13:00.324 fused_ordering(637) 00:13:00.324 fused_ordering(638) 00:13:00.324 fused_ordering(639) 00:13:00.324 fused_ordering(640) 00:13:00.324 fused_ordering(641) 00:13:00.324 fused_ordering(642) 00:13:00.324 fused_ordering(643) 00:13:00.324 fused_ordering(644) 00:13:00.324 fused_ordering(645) 00:13:00.324 fused_ordering(646) 00:13:00.324 fused_ordering(647) 00:13:00.324 fused_ordering(648) 00:13:00.324 fused_ordering(649) 00:13:00.324 fused_ordering(650) 00:13:00.324 fused_ordering(651) 00:13:00.324 fused_ordering(652) 00:13:00.324 fused_ordering(653) 00:13:00.324 fused_ordering(654) 00:13:00.324 fused_ordering(655) 00:13:00.324 fused_ordering(656) 00:13:00.324 fused_ordering(657) 00:13:00.324 fused_ordering(658) 00:13:00.324 fused_ordering(659) 00:13:00.324 fused_ordering(660) 00:13:00.324 fused_ordering(661) 00:13:00.324 fused_ordering(662) 00:13:00.324 fused_ordering(663) 00:13:00.324 fused_ordering(664) 00:13:00.324 fused_ordering(665) 00:13:00.324 fused_ordering(666) 00:13:00.324 fused_ordering(667) 00:13:00.324 fused_ordering(668) 00:13:00.324 fused_ordering(669) 00:13:00.324 fused_ordering(670) 00:13:00.324 fused_ordering(671) 00:13:00.324 fused_ordering(672) 00:13:00.324 fused_ordering(673) 00:13:00.324 fused_ordering(674) 00:13:00.324 fused_ordering(675) 00:13:00.324 fused_ordering(676) 00:13:00.324 fused_ordering(677) 00:13:00.324 fused_ordering(678) 00:13:00.324 fused_ordering(679) 00:13:00.324 fused_ordering(680) 00:13:00.324 fused_ordering(681) 00:13:00.324 fused_ordering(682) 00:13:00.324 fused_ordering(683) 00:13:00.324 fused_ordering(684) 00:13:00.324 fused_ordering(685) 00:13:00.324 fused_ordering(686) 00:13:00.324 fused_ordering(687) 00:13:00.324 fused_ordering(688) 00:13:00.324 fused_ordering(689) 00:13:00.324 fused_ordering(690) 00:13:00.324 fused_ordering(691) 00:13:00.324 fused_ordering(692) 00:13:00.324 fused_ordering(693) 00:13:00.324 fused_ordering(694) 00:13:00.324 
fused_ordering(695) 00:13:00.324 fused_ordering(696) 00:13:00.324 fused_ordering(697) 00:13:00.324 fused_ordering(698) 00:13:00.324 fused_ordering(699) 00:13:00.324 fused_ordering(700) 00:13:00.324 fused_ordering(701) 00:13:00.325 fused_ordering(702) 00:13:00.325 fused_ordering(703) 00:13:00.325 fused_ordering(704) 00:13:00.325 fused_ordering(705) 00:13:00.325 fused_ordering(706) 00:13:00.325 fused_ordering(707) 00:13:00.325 fused_ordering(708) 00:13:00.325 fused_ordering(709) 00:13:00.325 fused_ordering(710) 00:13:00.325 fused_ordering(711) 00:13:00.325 fused_ordering(712) 00:13:00.325 fused_ordering(713) 00:13:00.325 fused_ordering(714) 00:13:00.325 fused_ordering(715) 00:13:00.325 fused_ordering(716) 00:13:00.325 fused_ordering(717) 00:13:00.325 fused_ordering(718) 00:13:00.325 fused_ordering(719) 00:13:00.325 fused_ordering(720) 00:13:00.325 fused_ordering(721) 00:13:00.325 fused_ordering(722) 00:13:00.325 fused_ordering(723) 00:13:00.325 fused_ordering(724) 00:13:00.325 fused_ordering(725) 00:13:00.325 fused_ordering(726) 00:13:00.325 fused_ordering(727) 00:13:00.325 fused_ordering(728) 00:13:00.325 fused_ordering(729) 00:13:00.325 fused_ordering(730) 00:13:00.325 fused_ordering(731) 00:13:00.325 fused_ordering(732) 00:13:00.325 fused_ordering(733) 00:13:00.325 fused_ordering(734) 00:13:00.325 fused_ordering(735) 00:13:00.325 fused_ordering(736) 00:13:00.325 fused_ordering(737) 00:13:00.325 fused_ordering(738) 00:13:00.325 fused_ordering(739) 00:13:00.325 fused_ordering(740) 00:13:00.325 fused_ordering(741) 00:13:00.325 fused_ordering(742) 00:13:00.325 fused_ordering(743) 00:13:00.325 fused_ordering(744) 00:13:00.325 fused_ordering(745) 00:13:00.325 fused_ordering(746) 00:13:00.325 fused_ordering(747) 00:13:00.325 fused_ordering(748) 00:13:00.325 fused_ordering(749) 00:13:00.325 fused_ordering(750) 00:13:00.325 fused_ordering(751) 00:13:00.325 fused_ordering(752) 00:13:00.325 fused_ordering(753) 00:13:00.325 fused_ordering(754) 00:13:00.325 fused_ordering(755) 
00:13:00.325 fused_ordering(756) 00:13:00.325 fused_ordering(757) 00:13:00.325 fused_ordering(758) 00:13:00.325 fused_ordering(759) 00:13:00.325 fused_ordering(760) 00:13:00.325 fused_ordering(761) 00:13:00.325 fused_ordering(762) 00:13:00.325 fused_ordering(763) 00:13:00.325 fused_ordering(764) 00:13:00.325 fused_ordering(765) 00:13:00.325 fused_ordering(766) 00:13:00.325 fused_ordering(767) 00:13:00.325 fused_ordering(768) 00:13:00.325 fused_ordering(769) 00:13:00.325 fused_ordering(770) 00:13:00.325 fused_ordering(771) 00:13:00.325 fused_ordering(772) 00:13:00.325 fused_ordering(773) 00:13:00.325 fused_ordering(774) 00:13:00.325 fused_ordering(775) 00:13:00.325 fused_ordering(776) 00:13:00.325 fused_ordering(777) 00:13:00.325 fused_ordering(778) 00:13:00.325 fused_ordering(779) 00:13:00.325 fused_ordering(780) 00:13:00.325 fused_ordering(781) 00:13:00.325 fused_ordering(782) 00:13:00.325 fused_ordering(783) 00:13:00.325 fused_ordering(784) 00:13:00.325 fused_ordering(785) 00:13:00.325 fused_ordering(786) 00:13:00.325 fused_ordering(787) 00:13:00.325 fused_ordering(788) 00:13:00.325 fused_ordering(789) 00:13:00.325 fused_ordering(790) 00:13:00.325 fused_ordering(791) 00:13:00.325 fused_ordering(792) 00:13:00.325 fused_ordering(793) 00:13:00.325 fused_ordering(794) 00:13:00.325 fused_ordering(795) 00:13:00.325 fused_ordering(796) 00:13:00.325 fused_ordering(797) 00:13:00.325 fused_ordering(798) 00:13:00.325 fused_ordering(799) 00:13:00.325 fused_ordering(800) 00:13:00.325 fused_ordering(801) 00:13:00.325 fused_ordering(802) 00:13:00.325 fused_ordering(803) 00:13:00.325 fused_ordering(804) 00:13:00.325 fused_ordering(805) 00:13:00.325 fused_ordering(806) 00:13:00.325 fused_ordering(807) 00:13:00.325 fused_ordering(808) 00:13:00.325 fused_ordering(809) 00:13:00.325 fused_ordering(810) 00:13:00.325 fused_ordering(811) 00:13:00.325 fused_ordering(812) 00:13:00.325 fused_ordering(813) 00:13:00.325 fused_ordering(814) 00:13:00.325 fused_ordering(815) 00:13:00.325 
fused_ordering(816) 00:13:00.325 fused_ordering(817) 00:13:00.325 fused_ordering(818) 00:13:00.325 fused_ordering(819) 00:13:00.325 fused_ordering(820) 00:13:00.617 [2024-11-20 17:07:18.617364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258ff00 is same with the state(6) to be set 00:13:00.617 fused_ordering(821) 00:13:00.617 fused_ordering(822) 00:13:00.617 fused_ordering(823) 00:13:00.617 fused_ordering(824) 00:13:00.617 fused_ordering(825) 00:13:00.617 fused_ordering(826) 00:13:00.617 fused_ordering(827) 00:13:00.617 fused_ordering(828) 00:13:00.617 fused_ordering(829) 00:13:00.617 fused_ordering(830) 00:13:00.617 fused_ordering(831) 00:13:00.617 fused_ordering(832) 00:13:00.617 fused_ordering(833) 00:13:00.617 fused_ordering(834) 00:13:00.617 fused_ordering(835) 00:13:00.617 fused_ordering(836) 00:13:00.617 fused_ordering(837) 00:13:00.617 fused_ordering(838) 00:13:00.617 fused_ordering(839) 00:13:00.617 fused_ordering(840) 00:13:00.617 fused_ordering(841) 00:13:00.617 fused_ordering(842) 00:13:00.617 fused_ordering(843) 00:13:00.617 fused_ordering(844) 00:13:00.617 fused_ordering(845) 00:13:00.617 fused_ordering(846) 00:13:00.617 fused_ordering(847) 00:13:00.617 fused_ordering(848) 00:13:00.617 fused_ordering(849) 00:13:00.617 fused_ordering(850) 00:13:00.617 fused_ordering(851) 00:13:00.617 fused_ordering(852) 00:13:00.617 fused_ordering(853) 00:13:00.617 fused_ordering(854) 00:13:00.617 fused_ordering(855) 00:13:00.617 fused_ordering(856) 00:13:00.617 fused_ordering(857) 00:13:00.617 fused_ordering(858) 00:13:00.617 fused_ordering(859) 00:13:00.617 fused_ordering(860) 00:13:00.617 fused_ordering(861) 00:13:00.617 fused_ordering(862) 00:13:00.617 fused_ordering(863) 00:13:00.617 fused_ordering(864) 00:13:00.617 fused_ordering(865) 00:13:00.617 fused_ordering(866) 00:13:00.617 fused_ordering(867) 00:13:00.617 fused_ordering(868) 00:13:00.617 fused_ordering(869) 00:13:00.617 fused_ordering(870) 00:13:00.617 fused_ordering(871) 
00:13:00.617 fused_ordering(872) 00:13:00.617 fused_ordering(873) 00:13:00.617 fused_ordering(874) 00:13:00.617 fused_ordering(875) 00:13:00.617 fused_ordering(876) 00:13:00.617 fused_ordering(877) 00:13:00.617 fused_ordering(878) 00:13:00.617 fused_ordering(879) 00:13:00.617 fused_ordering(880) 00:13:00.617 fused_ordering(881) 00:13:00.617 fused_ordering(882) 00:13:00.617 fused_ordering(883) 00:13:00.617 fused_ordering(884) 00:13:00.617 fused_ordering(885) 00:13:00.617 fused_ordering(886) 00:13:00.617 fused_ordering(887) 00:13:00.617 fused_ordering(888) 00:13:00.617 fused_ordering(889) 00:13:00.617 fused_ordering(890) 00:13:00.617 fused_ordering(891) 00:13:00.617 fused_ordering(892) 00:13:00.617 fused_ordering(893) 00:13:00.617 fused_ordering(894) 00:13:00.617 fused_ordering(895) 00:13:00.617 fused_ordering(896) 00:13:00.617 fused_ordering(897) 00:13:00.617 fused_ordering(898) 00:13:00.617 fused_ordering(899) 00:13:00.617 fused_ordering(900) 00:13:00.617 fused_ordering(901) 00:13:00.617 fused_ordering(902) 00:13:00.617 fused_ordering(903) 00:13:00.617 fused_ordering(904) 00:13:00.617 fused_ordering(905) 00:13:00.617 fused_ordering(906) 00:13:00.617 fused_ordering(907) 00:13:00.617 fused_ordering(908) 00:13:00.617 fused_ordering(909) 00:13:00.617 fused_ordering(910) 00:13:00.617 fused_ordering(911) 00:13:00.617 fused_ordering(912) 00:13:00.617 fused_ordering(913) 00:13:00.617 fused_ordering(914) 00:13:00.617 fused_ordering(915) 00:13:00.617 fused_ordering(916) 00:13:00.617 fused_ordering(917) 00:13:00.617 fused_ordering(918) 00:13:00.617 fused_ordering(919) 00:13:00.617 fused_ordering(920) 00:13:00.617 fused_ordering(921) 00:13:00.617 fused_ordering(922) 00:13:00.617 fused_ordering(923) 00:13:00.617 fused_ordering(924) 00:13:00.617 fused_ordering(925) 00:13:00.617 fused_ordering(926) 00:13:00.617 fused_ordering(927) 00:13:00.617 fused_ordering(928) 00:13:00.617 fused_ordering(929) 00:13:00.617 fused_ordering(930) 00:13:00.617 fused_ordering(931) 00:13:00.617 
fused_ordering(932) 00:13:00.617 fused_ordering(933) 00:13:00.617 fused_ordering(934) 00:13:00.617 fused_ordering(935) 00:13:00.617 fused_ordering(936) 00:13:00.617 fused_ordering(937) 00:13:00.617 fused_ordering(938) 00:13:00.617 fused_ordering(939) 00:13:00.617 fused_ordering(940) 00:13:00.617 fused_ordering(941) 00:13:00.617 fused_ordering(942) 00:13:00.617 fused_ordering(943) 00:13:00.617 fused_ordering(944) 00:13:00.617 fused_ordering(945) 00:13:00.617 fused_ordering(946) 00:13:00.617 fused_ordering(947) 00:13:00.617 fused_ordering(948) 00:13:00.617 fused_ordering(949) 00:13:00.617 fused_ordering(950) 00:13:00.617 fused_ordering(951) 00:13:00.617 fused_ordering(952) 00:13:00.617 fused_ordering(953) 00:13:00.617 fused_ordering(954) 00:13:00.617 fused_ordering(955) 00:13:00.617 fused_ordering(956) 00:13:00.617 fused_ordering(957) 00:13:00.617 fused_ordering(958) 00:13:00.617 fused_ordering(959) 00:13:00.617 fused_ordering(960) 00:13:00.617 fused_ordering(961) 00:13:00.617 fused_ordering(962) 00:13:00.617 fused_ordering(963) 00:13:00.617 fused_ordering(964) 00:13:00.617 fused_ordering(965) 00:13:00.617 fused_ordering(966) 00:13:00.617 fused_ordering(967) 00:13:00.617 fused_ordering(968) 00:13:00.617 fused_ordering(969) 00:13:00.617 fused_ordering(970) 00:13:00.617 fused_ordering(971) 00:13:00.618 fused_ordering(972) 00:13:00.618 fused_ordering(973) 00:13:00.618 fused_ordering(974) 00:13:00.618 fused_ordering(975) 00:13:00.618 fused_ordering(976) 00:13:00.618 fused_ordering(977) 00:13:00.618 fused_ordering(978) 00:13:00.618 fused_ordering(979) 00:13:00.618 fused_ordering(980) 00:13:00.618 fused_ordering(981) 00:13:00.618 fused_ordering(982) 00:13:00.618 fused_ordering(983) 00:13:00.618 fused_ordering(984) 00:13:00.618 fused_ordering(985) 00:13:00.618 fused_ordering(986) 00:13:00.618 fused_ordering(987) 00:13:00.618 fused_ordering(988) 00:13:00.618 fused_ordering(989) 00:13:00.618 fused_ordering(990) 00:13:00.618 fused_ordering(991) 00:13:00.618 fused_ordering(992) 
00:13:00.618 fused_ordering(993) 00:13:00.618 fused_ordering(994) 00:13:00.618 fused_ordering(995) 00:13:00.618 fused_ordering(996) 00:13:00.618 fused_ordering(997) 00:13:00.618 fused_ordering(998) 00:13:00.618 fused_ordering(999) 00:13:00.618 fused_ordering(1000) 00:13:00.618 fused_ordering(1001) 00:13:00.618 fused_ordering(1002) 00:13:00.618 fused_ordering(1003) 00:13:00.618 fused_ordering(1004) 00:13:00.618 fused_ordering(1005) 00:13:00.618 fused_ordering(1006) 00:13:00.618 fused_ordering(1007) 00:13:00.618 fused_ordering(1008) 00:13:00.618 fused_ordering(1009) 00:13:00.618 fused_ordering(1010) 00:13:00.618 fused_ordering(1011) 00:13:00.618 fused_ordering(1012) 00:13:00.618 fused_ordering(1013) 00:13:00.618 fused_ordering(1014) 00:13:00.618 fused_ordering(1015) 00:13:00.618 fused_ordering(1016) 00:13:00.618 fused_ordering(1017) 00:13:00.618 fused_ordering(1018) 00:13:00.618 fused_ordering(1019) 00:13:00.618 fused_ordering(1020) 00:13:00.618 fused_ordering(1021) 00:13:00.618 fused_ordering(1022) 00:13:00.618 fused_ordering(1023) 00:13:00.618 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.923 rmmod nvme_tcp 00:13:00.923 
rmmod nvme_fabrics 00:13:00.923 rmmod nvme_keyring 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:00.923 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2444714 ']' 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2444714 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2444714 ']' 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2444714 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444714 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444714' 00:13:00.924 killing process with pid 2444714 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2444714 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2444714 00:13:00.924 17:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.924 17:07:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.485 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.485 00:13:03.485 real 0m10.775s 00:13:03.485 user 0m5.000s 00:13:03.485 sys 0m5.978s 00:13:03.485 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.485 17:07:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:03.485 ************************************ 00:13:03.485 END TEST nvmf_fused_ordering 00:13:03.485 
************************************ 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.485 ************************************ 00:13:03.485 START TEST nvmf_ns_masking 00:13:03.485 ************************************ 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:03.485 * Looking for test storage... 00:13:03.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.485 17:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:03.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.485 --rc genhtml_branch_coverage=1 00:13:03.485 --rc genhtml_function_coverage=1 00:13:03.485 --rc genhtml_legend=1 00:13:03.485 --rc geninfo_all_blocks=1 00:13:03.485 --rc 
geninfo_unexecuted_blocks=1 00:13:03.485 00:13:03.485 ' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:03.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.485 --rc genhtml_branch_coverage=1 00:13:03.485 --rc genhtml_function_coverage=1 00:13:03.485 --rc genhtml_legend=1 00:13:03.485 --rc geninfo_all_blocks=1 00:13:03.485 --rc geninfo_unexecuted_blocks=1 00:13:03.485 00:13:03.485 ' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:03.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.485 --rc genhtml_branch_coverage=1 00:13:03.485 --rc genhtml_function_coverage=1 00:13:03.485 --rc genhtml_legend=1 00:13:03.485 --rc geninfo_all_blocks=1 00:13:03.485 --rc geninfo_unexecuted_blocks=1 00:13:03.485 00:13:03.485 ' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:03.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.485 --rc genhtml_branch_coverage=1 00:13:03.485 --rc genhtml_function_coverage=1 00:13:03.485 --rc genhtml_legend=1 00:13:03.485 --rc geninfo_all_blocks=1 00:13:03.485 --rc geninfo_unexecuted_blocks=1 00:13:03.485 00:13:03.485 ' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.485 17:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.485 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.486 17:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c936ca11-d2db-4338-be20-e7a7eae4b431 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bb1c45eb-1b0f-4ab7-9f61-6451b861df3d 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:03.486 17:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=60f67b07-f32c-4937-a3c5-5b2c9bf91151 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.486 17:07:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:10.059 17:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:10.059 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:10.060 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:10.060 17:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:10.060 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:10.060 Found net devices under 0000:86:00.0: cvl_0_0 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:10.060 Found net devices under 0000:86:00.1: 
cvl_0_1 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.060 17:07:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:10.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:10.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:13:10.060 00:13:10.060 --- 10.0.0.2 ping statistics --- 00:13:10.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.060 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:13:10.060 00:13:10.060 --- 10.0.0.1 ping statistics --- 00:13:10.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.060 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.060 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2448673 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2448673 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2448673 ']' 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:10.061 [2024-11-20 17:07:27.304490] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:13:10.061 [2024-11-20 17:07:27.304542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.061 [2024-11-20 17:07:27.385160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.061 [2024-11-20 17:07:27.425510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.061 [2024-11-20 17:07:27.425545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.061 [2024-11-20 17:07:27.425552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.061 [2024-11-20 17:07:27.425558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.061 [2024-11-20 17:07:27.425563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
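Once the reactor starts on core 0, the trace shows `ns_masking.sh` configuring the target over JSON-RPC: create the TCP transport, two 64 MiB malloc bdevs, a subsystem, a namespace, and a listener. A condensed sketch of that sequence, with the commands taken from the log; the `RPC` path default and the dry-run printing are illustrative assumptions, not part of the test script:

```shell
# Condensed ns_masking setup, reconstructed from the trace above.
# RPC points at scripts/rpc.py in an SPDK tree; override for your checkout.
RPC=${RPC:-scripts/rpc.py}
NQN=nqn.2016-06.io.spdk:cnode1

setup_cmds() {
  # Emit the RPC sequence seen in the log, one command per line.
  cat <<EOF
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC bdev_malloc_create 64 512 -b Malloc2
$RPC nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 1
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
EOF
}

setup_cmds   # print the configuration sequence for inspection
```

Piping the output of `setup_cmds` to `sh` would apply it against a running `nvmf_tgt`; as written it only prints, so the sequence can be reviewed without a live SPDK target.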
00:13:10.061 [2024-11-20 17:07:27.426100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:10.061 [2024-11-20 17:07:27.734447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:10.061 Malloc1 00:13:10.061 17:07:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:10.320 Malloc2 00:13:10.321 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.579 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:10.579 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.838 [2024-11-20 17:07:28.749886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.838 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:10.838 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 60f67b07-f32c-4937-a3c5-5b2c9bf91151 -a 10.0.0.2 -s 4420 -i 4 00:13:11.097 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.097 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.097 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.097 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:11.097 17:07:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # 
grep -c SPDKISFASTANDAWESOME 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:12.998 [ 0]:0x1 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:12.998 17:07:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:12.998 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ade4dddee604acca3fb28c8e2bee271 00:13:12.998 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ade4dddee604acca3fb28c8e2bee271 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:12.998 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:13.258 [ 0]:0x1 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ade4dddee604acca3fb28c8e2bee271 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ade4dddee604acca3fb28c8e2bee271 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:13.258 [ 1]:0x2 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8cb1d586a2184acead073b3a549f457f 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8cb1d586a2184acead073b3a549f457f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:13.258 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.517 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.775 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:13.775 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:13.775 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 60f67b07-f32c-4937-a3c5-5b2c9bf91151 -a 10.0.0.2 -s 4420 -i 4 00:13:14.034 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:14.034 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.034 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.034 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:14.034 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:14.034 17:07:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:15.935 17:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
type -t ns_is_visible 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:15.935 17:07:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:16.194 [ 0]:0x2 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8cb1d586a2184acead073b3a549f457f 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8cb1d586a2184acead073b3a549f457f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.194 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.452 [ 0]:0x1 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ade4dddee604acca3fb28c8e2bee271 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ade4dddee604acca3fb28c8e2bee271 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:13:16.452 [ 1]:0x2 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8cb1d586a2184acead073b3a549f457f 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8cb1d586a2184acead073b3a549f457f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.452 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:16.711 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:16.712 [ 0]:0x2 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.712 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.970 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8cb1d586a2184acead073b3a549f457f 00:13:16.970 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 8cb1d586a2184acead073b3a549f457f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.970 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:16.970 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.970 17:07:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:17.229 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:17.229 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 60f67b07-f32c-4937-a3c5-5b2c9bf91151 -a 10.0.0.2 -s 4420 -i 4 00:13:17.229 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:17.229 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.229 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.229 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:17.229 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:17.229 17:07:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.762 [ 0]:0x1 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ade4dddee604acca3fb28c8e2bee271 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ade4dddee604acca3fb28c8e2bee271 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:19.762 [ 1]:0x2 00:13:19.762 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8cb1d586a2184acead073b3a549f457f 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8cb1d586a2184acead073b3a549f457f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:19.763 [ 0]:0x2 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8cb1d586a2184acead073b3a549f457f 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8cb1d586a2184acead073b3a549f457f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:19.763 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:20.022 [2024-11-20 17:07:37.855569] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:20.022 request: 00:13:20.022 { 00:13:20.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.022 "nsid": 2, 00:13:20.022 "host": "nqn.2016-06.io.spdk:host1", 00:13:20.022 "method": "nvmf_ns_remove_host", 00:13:20.022 "req_id": 1 00:13:20.022 } 00:13:20.022 Got JSON-RPC error response 00:13:20.022 response: 00:13:20.022 { 00:13:20.022 "code": -32602, 00:13:20.022 "message": "Invalid parameters" 00:13:20.022 } 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # 
valid_exec_arg ns_is_visible 0x1 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.022 17:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:20.022 [ 0]:0x2 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8cb1d586a2184acead073b3a549f457f 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8cb1d586a2184acead073b3a549f457f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:20.022 17:07:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2450667 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2450667 /var/tmp/host.sock 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2450667 ']' 00:13:20.022 
17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:20.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.022 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:20.281 [2024-11-20 17:07:38.084845] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:13:20.281 [2024-11-20 17:07:38.084892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450667 ] 00:13:20.281 [2024-11-20 17:07:38.157317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.281 [2024-11-20 17:07:38.197546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.539 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.539 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:20.539 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.797 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.798 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c936ca11-d2db-4338-be20-e7a7eae4b431 00:13:20.798 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:20.798 17:07:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C936CA11D2DB4338BE20E7A7EAE4B431 -i 00:13:21.056 17:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bb1c45eb-1b0f-4ab7-9f61-6451b861df3d 00:13:21.056 17:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:21.056 17:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BB1C45EB1B0F4AB79F616451B861DF3D -i 00:13:21.315 17:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:21.574 17:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:21.832 17:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:21.832 17:07:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:22.091 nvme0n1 00:13:22.091 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:22.091 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:22.658 nvme1n2 00:13:22.658 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:22.658 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:22.658 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:22.658 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:22.658 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:22.917 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:22.917 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:22.917 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:22.918 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:22.918 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c936ca11-d2db-4338-be20-e7a7eae4b431 == \c\9\3\6\c\a\1\1\-\d\2\d\b\-\4\3\3\8\-\b\e\2\0\-\e\7\a\7\e\a\e\4\b\4\3\1 ]] 00:13:22.918 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:22.918 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:22.918 17:07:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:23.177 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ bb1c45eb-1b0f-4ab7-9f61-6451b861df3d == \b\b\1\c\4\5\e\b\-\1\b\0\f\-\4\a\b\7\-\9\f\6\1\-\6\4\5\1\b\8\6\1\d\f\3\d ]] 00:13:23.177 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.436 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c936ca11-d2db-4338-be20-e7a7eae4b431 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C936CA11D2DB4338BE20E7A7EAE4B431 00:13:23.696 17:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C936CA11D2DB4338BE20E7A7EAE4B431 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C936CA11D2DB4338BE20E7A7EAE4B431 00:13:23.696 [2024-11-20 17:07:41.666005] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: invalid 00:13:23.696 [2024-11-20 17:07:41.666037] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:23.696 [2024-11-20 17:07:41.666045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.696 request: 00:13:23.696 { 00:13:23.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.696 "namespace": { 00:13:23.696 "bdev_name": "invalid", 00:13:23.696 "nsid": 1, 00:13:23.696 "nguid": "C936CA11D2DB4338BE20E7A7EAE4B431", 00:13:23.696 "no_auto_visible": false, 00:13:23.696 "hide_metadata": false 00:13:23.696 }, 00:13:23.696 "method": "nvmf_subsystem_add_ns", 00:13:23.696 "req_id": 1 00:13:23.696 } 00:13:23.696 Got JSON-RPC error response 00:13:23.696 response: 00:13:23.696 { 00:13:23.696 "code": -32602, 00:13:23.696 "message": "Invalid parameters" 00:13:23.696 } 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c936ca11-d2db-4338-be20-e7a7eae4b431 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:23.696 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C936CA11D2DB4338BE20E7A7EAE4B431 -i 00:13:23.955 17:07:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:26.489 
17:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:26.489 17:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:26.489 17:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2450667 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2450667 ']' 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2450667 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2450667 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2450667' 00:13:26.489 killing process with pid 2450667 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2450667 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2450667 00:13:26.489 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.748 rmmod nvme_tcp 00:13:26.748 rmmod nvme_fabrics 00:13:26.748 rmmod nvme_keyring 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2448673 ']' 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2448673 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2448673 ']' 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2448673 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # 
uname 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.748 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2448673 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2448673' 00:13:27.007 killing process with pid 2448673 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2448673 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2448673 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.007 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.008 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.008 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.008 17:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.008 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.008 17:07:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.544 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.544 00:13:29.544 real 0m25.999s 00:13:29.545 user 0m31.314s 00:13:29.545 sys 0m7.001s 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.545 ************************************ 00:13:29.545 END TEST nvmf_ns_masking 00:13:29.545 ************************************ 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.545 ************************************ 00:13:29.545 START TEST nvmf_nvme_cli 00:13:29.545 ************************************ 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:29.545 * Looking for test storage... 
00:13:29.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:29.545 17:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.545 --rc 
genhtml_branch_coverage=1 00:13:29.545 --rc genhtml_function_coverage=1 00:13:29.545 --rc genhtml_legend=1 00:13:29.545 --rc geninfo_all_blocks=1 00:13:29.545 --rc geninfo_unexecuted_blocks=1 00:13:29.545 00:13:29.545 ' 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.545 --rc genhtml_branch_coverage=1 00:13:29.545 --rc genhtml_function_coverage=1 00:13:29.545 --rc genhtml_legend=1 00:13:29.545 --rc geninfo_all_blocks=1 00:13:29.545 --rc geninfo_unexecuted_blocks=1 00:13:29.545 00:13:29.545 ' 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.545 --rc genhtml_branch_coverage=1 00:13:29.545 --rc genhtml_function_coverage=1 00:13:29.545 --rc genhtml_legend=1 00:13:29.545 --rc geninfo_all_blocks=1 00:13:29.545 --rc geninfo_unexecuted_blocks=1 00:13:29.545 00:13:29.545 ' 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.545 --rc genhtml_branch_coverage=1 00:13:29.545 --rc genhtml_function_coverage=1 00:13:29.545 --rc genhtml_legend=1 00:13:29.545 --rc geninfo_all_blocks=1 00:13:29.545 --rc geninfo_unexecuted_blocks=1 00:13:29.545 00:13:29.545 ' 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.545 17:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.545 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.546 17:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.546 17:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
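The `[: : integer expression expected` message from common.sh line 33 above is the classic shape of an empty variable hitting an arithmetic `test`. A standalone sketch reproducing it (NO_HUGE_FLAG is a stand-in name for illustration, not the real variable from common.sh):

```shell
#!/usr/bin/env bash
# Reproduces the "[: : integer expression expected" noise seen in the
# trace: an unset/empty variable fed to test(1)'s -eq operator.
NO_HUGE_FLAG=""

# Failing shape: [ '' -eq 1 ] is a test(1) error (exit status 2 -> false).
if [ "$NO_HUGE_FLAG" -eq 1 ] 2>/dev/null; then
    echo "hugepages disabled"
fi

# Defensive form: default the empty value to 0 before comparing.
if [ "${NO_HUGE_FLAG:-0}" -eq 1 ]; then
    echo "hugepages disabled"
else
    echo "hugepages enabled"
fi
# prints: hugepages enabled
```

The test still proceeds because the failed comparison evaluates false rather than aborting, which is why the log shows the message and then continues.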
_remove_spdk_ns 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.546 17:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:36.121 17:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.121 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:36.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:36.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.122 17:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:36.122 Found net devices under 0000:86:00.0: cvl_0_0 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:36.122 Found net devices under 0000:86:00.1: cvl_0_1 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.122 17:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:36.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:13:36.122 00:13:36.122 --- 10.0.0.2 ping statistics --- 00:13:36.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.122 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:13:36.122 00:13:36.122 --- 10.0.0.1 ping statistics --- 00:13:36.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.122 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.122 17:07:53 
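The nvmf_tcp_init sequence above moves one physical port (cvl_0_0) into a network namespace to play the target while cvl_0_1 stays in the root namespace as the initiator, then verifies reachability with pings in both directions. A dry-run sketch of the same plumbing; `run` is a hypothetical wrapper that only prints, so this executes unprivileged (replace it with `sudo` to apply for real):

```shell
#!/usr/bin/env bash
# Dry-run of the namespace plumbing from the trace. Interface names
# cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones in the log.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                         # target-side port
run ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                      # initiator -> target
```

Isolating the target port in its own namespace is what lets a single host exercise a real TCP path between two physical NIC ports without the kernel short-circuiting the traffic locally.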
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2455365 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2455365 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2455365 ']' 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.122 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.122 [2024-11-20 17:07:53.384844] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:13:36.122 [2024-11-20 17:07:53.384887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.122 [2024-11-20 17:07:53.464637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.122 [2024-11-20 17:07:53.510315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.123 [2024-11-20 17:07:53.510351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.123 [2024-11-20 17:07:53.510358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.123 [2024-11-20 17:07:53.510364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.123 [2024-11-20 17:07:53.510370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:36.123 [2024-11-20 17:07:53.511778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.123 [2024-11-20 17:07:53.511819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.123 [2024-11-20 17:07:53.511930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.123 [2024-11-20 17:07:53.511931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 [2024-11-20 17:07:53.654332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 Malloc0 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 Malloc1 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 [2024-11-20 17:07:53.754733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:36.123 00:13:36.123 Discovery Log Number of Records 2, Generation counter 2 00:13:36.123 =====Discovery Log Entry 0====== 00:13:36.123 trtype: tcp 00:13:36.123 adrfam: ipv4 00:13:36.123 subtype: current discovery subsystem 00:13:36.123 treq: not required 00:13:36.123 portid: 0 00:13:36.123 trsvcid: 4420 
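The rpc_cmd calls above configure the target: one TCP transport, two 64 MiB malloc bdevs, a subsystem carrying both as namespaces, plus data and discovery listeners. A condensed sketch of the same sequence; `rpc` here is a stub that prints the `scripts/rpc.py` call it would make, so this runs without a live SPDK target (drop the stub to run for real):

```shell
#!/usr/bin/env bash
# RPC sequence driven by target/nvme_cli.sh in the trace, condensed.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB, 512 B blocks
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The discovery listener on the same address/port is what makes the two-record discovery log (one discovery subsystem, one NVMe subsystem) that appears further down.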
00:13:36.123 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:36.123 traddr: 10.0.0.2 00:13:36.123 eflags: explicit discovery connections, duplicate discovery information 00:13:36.123 sectype: none 00:13:36.123 =====Discovery Log Entry 1====== 00:13:36.123 trtype: tcp 00:13:36.123 adrfam: ipv4 00:13:36.123 subtype: nvme subsystem 00:13:36.123 treq: not required 00:13:36.123 portid: 0 00:13:36.123 trsvcid: 4420 00:13:36.123 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:36.123 traddr: 10.0.0.2 00:13:36.123 eflags: none 00:13:36.123 sectype: none 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:36.123 17:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.060 17:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:37.060 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:37.060 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.060 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:37.060 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:37.060 17:07:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:39.593 
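The waitforserial helper traced above polls `lsblk` until the expected number of block devices carrying the test serial appear. A sketch of that loop's shape; `count_devs` is a stub standing in for `lsblk -l -o NAME,SERIAL | grep -c "$serial"` so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Shape of the waitforserial polling loop from autotest_common.sh.
count_devs() { echo 2; }   # stub: pretend both namespaces appeared at once

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do                 # up to 16 attempts
        found=$(count_devs "$serial")
        (( found == expected )) && return 0
        sleep 0                               # real helper sleeps 2s per poll
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME 2 && echo "devices ready"
# prints: devices ready
```

Passing the expected count (2 here, one per namespace) avoids the race where the test proceeds after only the first namespace of the subsystem has surfaced.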
17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:39.593 /dev/nvme0n2 ]] 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
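The `read -r dev _` / `[[ $dev == /dev/nvme* ]]` loop traced above is how get_nvme_devs strips the `nvme list` header and keeps only device nodes. A self-contained sketch fed a canned `nvme list`-style table, so it runs without nvme-cli installed:

```shell
#!/usr/bin/env bash
# get_nvme_devs-style filter from nvmf/common.sh, on canned input.
sample_nvme_list() {
    printf '%s\n' \
        'Node                  SN                   Model' \
        '--------------------- -------------------- -----' \
        '/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1' \
        '/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1'
}

get_nvme_devs() {
    local dev _
    while read -r dev _; do
        # Only lines whose first column is a device node survive;
        # the "Node" header and dashed separator are skipped.
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done < <(sample_nvme_list)
}

devs=($(get_nvme_devs))
echo "found ${#devs[@]} devices: ${devs[*]}"
# prints: found 2 devices: /dev/nvme0n1 /dev/nvme0n2
```

Comparing the device count before and after `nvme connect` (0 vs 2 in this run) is how the test decides the connect actually exposed the subsystem's namespaces.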
--------------------- == /dev/nvme* ]] 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:39.593 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.594 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:39.594 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:39.594 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.594 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:39.594 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.852 rmmod nvme_tcp 00:13:39.852 rmmod nvme_fabrics 00:13:39.852 rmmod nvme_keyring 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2455365 ']' 
00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2455365 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2455365 ']' 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2455365 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2455365 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2455365' 00:13:39.852 killing process with pid 2455365 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2455365 00:13:39.852 17:07:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2455365 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.112 17:07:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.648 00:13:42.648 real 0m12.976s 00:13:42.648 user 0m19.763s 00:13:42.648 sys 0m5.114s 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:42.648 ************************************ 00:13:42.648 END TEST nvmf_nvme_cli 00:13:42.648 ************************************ 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.648 ************************************ 00:13:42.648 
START TEST nvmf_vfio_user 00:13:42.648 ************************************ 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:42.648 * Looking for test storage... 00:13:42.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.648 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.649 17:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:42.649 17:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.649 --rc genhtml_branch_coverage=1 00:13:42.649 --rc genhtml_function_coverage=1 00:13:42.649 --rc genhtml_legend=1 00:13:42.649 --rc geninfo_all_blocks=1 00:13:42.649 --rc geninfo_unexecuted_blocks=1 00:13:42.649 00:13:42.649 ' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.649 --rc genhtml_branch_coverage=1 00:13:42.649 --rc genhtml_function_coverage=1 00:13:42.649 --rc genhtml_legend=1 00:13:42.649 --rc geninfo_all_blocks=1 00:13:42.649 --rc geninfo_unexecuted_blocks=1 00:13:42.649 00:13:42.649 ' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.649 --rc genhtml_branch_coverage=1 00:13:42.649 --rc genhtml_function_coverage=1 00:13:42.649 --rc genhtml_legend=1 00:13:42.649 --rc geninfo_all_blocks=1 00:13:42.649 --rc geninfo_unexecuted_blocks=1 00:13:42.649 00:13:42.649 ' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.649 --rc genhtml_branch_coverage=1 00:13:42.649 --rc genhtml_function_coverage=1 00:13:42.649 --rc genhtml_legend=1 00:13:42.649 --rc geninfo_all_blocks=1 00:13:42.649 --rc geninfo_unexecuted_blocks=1 00:13:42.649 00:13:42.649 ' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.649 
17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:42.649 17:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:42.649 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2456642 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2456642' 00:13:42.650 Process pid: 2456642 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2456642 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2456642 ']' 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:42.650 [2024-11-20 17:08:00.446057] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:13:42.650 [2024-11-20 17:08:00.446102] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.650 [2024-11-20 17:08:00.523434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.650 [2024-11-20 17:08:00.566070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.650 [2024-11-20 17:08:00.566108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.650 [2024-11-20 17:08:00.566115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.650 [2024-11-20 17:08:00.566121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.650 [2024-11-20 17:08:00.566126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:42.650 [2024-11-20 17:08:00.567577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.650 [2024-11-20 17:08:00.567685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.650 [2024-11-20 17:08:00.567791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.650 [2024-11-20 17:08:00.567792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:42.650 17:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:44.026 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:44.026 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:44.026 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:44.026 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:44.026 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:44.026 17:08:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:44.284 Malloc1 00:13:44.284 17:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:44.284 17:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:44.542 17:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:44.801 17:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:44.801 17:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:44.801 17:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:45.060 Malloc2 00:13:45.060 17:08:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:45.319 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:45.319 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:45.578 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:45.578 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:45.578 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:45.578 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:45.578 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:45.578 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:45.578 [2024-11-20 17:08:03.551759] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:13:45.578 [2024-11-20 17:08:03.551792] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457163 ] 00:13:45.578 [2024-11-20 17:08:03.592679] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:45.578 [2024-11-20 17:08:03.598067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:45.578 [2024-11-20 17:08:03.598089] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4a07993000 00:13:45.578 [2024-11-20 17:08:03.599066] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.578 [2024-11-20 17:08:03.600073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.578 [2024-11-20 17:08:03.601078] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.578 [2024-11-20 17:08:03.602082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.578 [2024-11-20 17:08:03.603079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.578 [2024-11-20 17:08:03.604087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.578 [2024-11-20 17:08:03.605094] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.578 [2024-11-20 17:08:03.606100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.578 [2024-11-20 17:08:03.607107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:45.578 [2024-11-20 17:08:03.607116] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4a07988000 00:13:45.578 [2024-11-20 17:08:03.608028] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:45.578 [2024-11-20 17:08:03.617475] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:45.578 [2024-11-20 17:08:03.617499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:45.839 [2024-11-20 17:08:03.623213] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:45.839 [2024-11-20 17:08:03.623248] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:45.839 [2024-11-20 17:08:03.623314] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:45.839 [2024-11-20 17:08:03.623327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:45.839 [2024-11-20 17:08:03.623332] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:45.839 [2024-11-20 17:08:03.624213] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:45.839 [2024-11-20 17:08:03.624221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:45.839 [2024-11-20 17:08:03.624228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:45.839 [2024-11-20 17:08:03.625212] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:45.839 [2024-11-20 17:08:03.625220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:45.839 [2024-11-20 17:08:03.625226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:45.839 [2024-11-20 17:08:03.626218] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:45.839 [2024-11-20 17:08:03.626226] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:45.839 [2024-11-20 17:08:03.627225] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:45.839 [2024-11-20 17:08:03.627232] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:45.839 [2024-11-20 17:08:03.627237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:45.839 [2024-11-20 17:08:03.627242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:45.839 [2024-11-20 17:08:03.627350] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:45.839 [2024-11-20 17:08:03.627354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:45.839 [2024-11-20 17:08:03.627359] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:45.839 [2024-11-20 17:08:03.628231] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:45.839 [2024-11-20 17:08:03.629236] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:45.839 [2024-11-20 17:08:03.630240] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:45.839 [2024-11-20 17:08:03.631238] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:45.839 [2024-11-20 17:08:03.631300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:45.839 [2024-11-20 17:08:03.632247] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:45.839 [2024-11-20 17:08:03.632254] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:45.839 [2024-11-20 17:08:03.632259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:45.839 [2024-11-20 17:08:03.632275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:45.839 [2024-11-20 17:08:03.632282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:45.839 [2024-11-20 17:08:03.632295] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.839 [2024-11-20 17:08:03.632299] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.839 [2024-11-20 17:08:03.632303] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.839 [2024-11-20 17:08:03.632314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.839 [2024-11-20 17:08:03.632361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:45.839 [2024-11-20 17:08:03.632370] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:45.839 [2024-11-20 17:08:03.632375] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:45.839 [2024-11-20 17:08:03.632378] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:45.839 [2024-11-20 17:08:03.632385] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:45.839 [2024-11-20 17:08:03.632391] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:45.839 [2024-11-20 17:08:03.632395] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:45.839 [2024-11-20 17:08:03.632400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:45.839 [2024-11-20 17:08:03.632408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:45.839 [2024-11-20 17:08:03.632417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:45.839 [2024-11-20 17:08:03.632431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.840 [2024-11-20 
17:08:03.632448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.840 [2024-11-20 17:08:03.632456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.840 [2024-11-20 17:08:03.632463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.840 [2024-11-20 17:08:03.632467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632500] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:45.840 [2024-11-20 17:08:03.632505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632595] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:45.840 [2024-11-20 17:08:03.632599] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:45.840 [2024-11-20 17:08:03.632603] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.840 [2024-11-20 17:08:03.632609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632631] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:45.840 [2024-11-20 17:08:03.632638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632650] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.840 [2024-11-20 17:08:03.632654] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.840 [2024-11-20 17:08:03.632657] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.840 [2024-11-20 17:08:03.632662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632707] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.840 [2024-11-20 17:08:03.632711] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.840 [2024-11-20 17:08:03.632713] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.840 [2024-11-20 17:08:03.632719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632768] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:45.840 [2024-11-20 17:08:03.632773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:45.840 [2024-11-20 17:08:03.632777] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:45.840 [2024-11-20 17:08:03.632794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632807] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632879] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:45.840 [2024-11-20 17:08:03.632883] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:45.840 [2024-11-20 17:08:03.632886] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:45.840 [2024-11-20 17:08:03.632889] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:45.840 [2024-11-20 17:08:03.632892] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:45.840 [2024-11-20 17:08:03.632898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:45.840 [2024-11-20 17:08:03.632904] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:45.840 [2024-11-20 17:08:03.632908] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:45.840 [2024-11-20 17:08:03.632911] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.840 [2024-11-20 17:08:03.632916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632922] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:45.840 [2024-11-20 17:08:03.632926] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.840 [2024-11-20 17:08:03.632929] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.840 [2024-11-20 17:08:03.632934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632941] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:45.840 [2024-11-20 17:08:03.632945] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:45.840 [2024-11-20 17:08:03.632947] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.840 [2024-11-20 17:08:03.632953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:45.840 [2024-11-20 17:08:03.632959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:45.840 [2024-11-20 17:08:03.632987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:45.840 ===================================================== 00:13:45.840 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:45.840 ===================================================== 00:13:45.840 Controller Capabilities/Features 00:13:45.840 ================================ 00:13:45.840 Vendor ID: 4e58 00:13:45.840 Subsystem Vendor ID: 4e58 00:13:45.840 Serial Number: SPDK1 00:13:45.840 Model Number: SPDK bdev Controller 00:13:45.840 Firmware Version: 25.01 00:13:45.840 Recommended Arb Burst: 6 00:13:45.840 IEEE OUI Identifier: 8d 6b 50 00:13:45.840 Multi-path I/O 00:13:45.840 May have multiple subsystem ports: Yes 00:13:45.840 May have multiple controllers: Yes 00:13:45.840 Associated with SR-IOV VF: No 00:13:45.841 Max Data Transfer Size: 131072 00:13:45.841 Max Number of Namespaces: 32 00:13:45.841 Max Number of I/O Queues: 127 00:13:45.841 NVMe Specification Version (VS): 1.3 00:13:45.841 NVMe Specification Version (Identify): 1.3 00:13:45.841 Maximum Queue Entries: 256 00:13:45.841 Contiguous Queues Required: Yes 00:13:45.841 Arbitration Mechanisms Supported 00:13:45.841 Weighted Round Robin: Not Supported 00:13:45.841 Vendor Specific: Not Supported 00:13:45.841 Reset Timeout: 15000 ms 00:13:45.841 Doorbell Stride: 4 bytes 00:13:45.841 NVM Subsystem Reset: Not Supported 00:13:45.841 Command Sets Supported 00:13:45.841 NVM Command Set: Supported 00:13:45.841 Boot Partition: Not Supported 00:13:45.841 Memory 
Page Size Minimum: 4096 bytes 00:13:45.841 Memory Page Size Maximum: 4096 bytes 00:13:45.841 Persistent Memory Region: Not Supported 00:13:45.841 Optional Asynchronous Events Supported 00:13:45.841 Namespace Attribute Notices: Supported 00:13:45.841 Firmware Activation Notices: Not Supported 00:13:45.841 ANA Change Notices: Not Supported 00:13:45.841 PLE Aggregate Log Change Notices: Not Supported 00:13:45.841 LBA Status Info Alert Notices: Not Supported 00:13:45.841 EGE Aggregate Log Change Notices: Not Supported 00:13:45.841 Normal NVM Subsystem Shutdown event: Not Supported 00:13:45.841 Zone Descriptor Change Notices: Not Supported 00:13:45.841 Discovery Log Change Notices: Not Supported 00:13:45.841 Controller Attributes 00:13:45.841 128-bit Host Identifier: Supported 00:13:45.841 Non-Operational Permissive Mode: Not Supported 00:13:45.841 NVM Sets: Not Supported 00:13:45.841 Read Recovery Levels: Not Supported 00:13:45.841 Endurance Groups: Not Supported 00:13:45.841 Predictable Latency Mode: Not Supported 00:13:45.841 Traffic Based Keep ALive: Not Supported 00:13:45.841 Namespace Granularity: Not Supported 00:13:45.841 SQ Associations: Not Supported 00:13:45.841 UUID List: Not Supported 00:13:45.841 Multi-Domain Subsystem: Not Supported 00:13:45.841 Fixed Capacity Management: Not Supported 00:13:45.841 Variable Capacity Management: Not Supported 00:13:45.841 Delete Endurance Group: Not Supported 00:13:45.841 Delete NVM Set: Not Supported 00:13:45.841 Extended LBA Formats Supported: Not Supported 00:13:45.841 Flexible Data Placement Supported: Not Supported 00:13:45.841 00:13:45.841 Controller Memory Buffer Support 00:13:45.841 ================================ 00:13:45.841 Supported: No 00:13:45.841 00:13:45.841 Persistent Memory Region Support 00:13:45.841 ================================ 00:13:45.841 Supported: No 00:13:45.841 00:13:45.841 Admin Command Set Attributes 00:13:45.841 ============================ 00:13:45.841 Security Send/Receive: Not Supported 
00:13:45.841 Format NVM: Not Supported 00:13:45.841 Firmware Activate/Download: Not Supported 00:13:45.841 Namespace Management: Not Supported 00:13:45.841 Device Self-Test: Not Supported 00:13:45.841 Directives: Not Supported 00:13:45.841 NVMe-MI: Not Supported 00:13:45.841 Virtualization Management: Not Supported 00:13:45.841 Doorbell Buffer Config: Not Supported 00:13:45.841 Get LBA Status Capability: Not Supported 00:13:45.841 Command & Feature Lockdown Capability: Not Supported 00:13:45.841 Abort Command Limit: 4 00:13:45.841 Async Event Request Limit: 4 00:13:45.841 Number of Firmware Slots: N/A 00:13:45.841 Firmware Slot 1 Read-Only: N/A 00:13:45.841 Firmware Activation Without Reset: N/A 00:13:45.841 Multiple Update Detection Support: N/A 00:13:45.841 Firmware Update Granularity: No Information Provided 00:13:45.841 Per-Namespace SMART Log: No 00:13:45.841 Asymmetric Namespace Access Log Page: Not Supported 00:13:45.841 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:45.841 Command Effects Log Page: Supported 00:13:45.841 Get Log Page Extended Data: Supported 00:13:45.841 Telemetry Log Pages: Not Supported 00:13:45.841 Persistent Event Log Pages: Not Supported 00:13:45.841 Supported Log Pages Log Page: May Support 00:13:45.841 Commands Supported & Effects Log Page: Not Supported 00:13:45.841 Feature Identifiers & Effects Log Page:May Support 00:13:45.841 NVMe-MI Commands & Effects Log Page: May Support 00:13:45.841 Data Area 4 for Telemetry Log: Not Supported 00:13:45.841 Error Log Page Entries Supported: 128 00:13:45.841 Keep Alive: Supported 00:13:45.841 Keep Alive Granularity: 10000 ms 00:13:45.841 00:13:45.841 NVM Command Set Attributes 00:13:45.841 ========================== 00:13:45.841 Submission Queue Entry Size 00:13:45.841 Max: 64 00:13:45.841 Min: 64 00:13:45.841 Completion Queue Entry Size 00:13:45.841 Max: 16 00:13:45.841 Min: 16 00:13:45.841 Number of Namespaces: 32 00:13:45.841 Compare Command: Supported 00:13:45.841 Write Uncorrectable 
Command: Not Supported 00:13:45.841 Dataset Management Command: Supported 00:13:45.841 Write Zeroes Command: Supported 00:13:45.841 Set Features Save Field: Not Supported 00:13:45.841 Reservations: Not Supported 00:13:45.841 Timestamp: Not Supported 00:13:45.841 Copy: Supported 00:13:45.841 Volatile Write Cache: Present 00:13:45.841 Atomic Write Unit (Normal): 1 00:13:45.841 Atomic Write Unit (PFail): 1 00:13:45.841 Atomic Compare & Write Unit: 1 00:13:45.841 Fused Compare & Write: Supported 00:13:45.841 Scatter-Gather List 00:13:45.841 SGL Command Set: Supported (Dword aligned) 00:13:45.841 SGL Keyed: Not Supported 00:13:45.841 SGL Bit Bucket Descriptor: Not Supported 00:13:45.841 SGL Metadata Pointer: Not Supported 00:13:45.841 Oversized SGL: Not Supported 00:13:45.841 SGL Metadata Address: Not Supported 00:13:45.841 SGL Offset: Not Supported 00:13:45.841 Transport SGL Data Block: Not Supported 00:13:45.841 Replay Protected Memory Block: Not Supported 00:13:45.841 00:13:45.841 Firmware Slot Information 00:13:45.841 ========================= 00:13:45.841 Active slot: 1 00:13:45.841 Slot 1 Firmware Revision: 25.01 00:13:45.841 00:13:45.841 00:13:45.841 Commands Supported and Effects 00:13:45.841 ============================== 00:13:45.841 Admin Commands 00:13:45.841 -------------- 00:13:45.841 Get Log Page (02h): Supported 00:13:45.841 Identify (06h): Supported 00:13:45.841 Abort (08h): Supported 00:13:45.841 Set Features (09h): Supported 00:13:45.841 Get Features (0Ah): Supported 00:13:45.841 Asynchronous Event Request (0Ch): Supported 00:13:45.841 Keep Alive (18h): Supported 00:13:45.841 I/O Commands 00:13:45.841 ------------ 00:13:45.841 Flush (00h): Supported LBA-Change 00:13:45.841 Write (01h): Supported LBA-Change 00:13:45.841 Read (02h): Supported 00:13:45.841 Compare (05h): Supported 00:13:45.841 Write Zeroes (08h): Supported LBA-Change 00:13:45.841 Dataset Management (09h): Supported LBA-Change 00:13:45.841 Copy (19h): Supported LBA-Change 00:13:45.841 
00:13:45.841 Error Log 00:13:45.841 ========= 00:13:45.841 00:13:45.841 Arbitration 00:13:45.841 =========== 00:13:45.841 Arbitration Burst: 1 00:13:45.841 00:13:45.841 Power Management 00:13:45.841 ================ 00:13:45.841 Number of Power States: 1 00:13:45.841 Current Power State: Power State #0 00:13:45.841 Power State #0: 00:13:45.841 Max Power: 0.00 W 00:13:45.841 Non-Operational State: Operational 00:13:45.841 Entry Latency: Not Reported 00:13:45.841 Exit Latency: Not Reported 00:13:45.841 Relative Read Throughput: 0 00:13:45.841 Relative Read Latency: 0 00:13:45.841 Relative Write Throughput: 0 00:13:45.841 Relative Write Latency: 0 00:13:45.841 Idle Power: Not Reported 00:13:45.841 Active Power: Not Reported 00:13:45.841 Non-Operational Permissive Mode: Not Supported 00:13:45.841 00:13:45.841 Health Information 00:13:45.841 ================== 00:13:45.841 Critical Warnings: 00:13:45.841 Available Spare Space: OK 00:13:45.841 Temperature: OK 00:13:45.841 Device Reliability: OK 00:13:45.841 Read Only: No 00:13:45.841 Volatile Memory Backup: OK 00:13:45.841 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:45.841 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:45.841 Available Spare: 0% 00:13:45.841 Available Sp[2024-11-20 17:08:03.633067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:45.841 [2024-11-20 17:08:03.633075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:45.841 [2024-11-20 17:08:03.633097] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:45.841 [2024-11-20 17:08:03.633105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.841 [2024-11-20 17:08:03.633111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.842 [2024-11-20 17:08:03.633116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.842 [2024-11-20 17:08:03.633122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.842 [2024-11-20 17:08:03.633257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:45.842 [2024-11-20 17:08:03.633268] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:45.842 [2024-11-20 17:08:03.634260] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.842 [2024-11-20 17:08:03.634311] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:45.842 [2024-11-20 17:08:03.634317] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:45.842 [2024-11-20 17:08:03.635265] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:45.842 [2024-11-20 17:08:03.635274] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:45.842 [2024-11-20 17:08:03.635320] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:45.842 [2024-11-20 17:08:03.638207] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:45.842 are Threshold: 0% 00:13:45.842 Life Percentage Used: 0% 
00:13:45.842 Data Units Read: 0 00:13:45.842 Data Units Written: 0 00:13:45.842 Host Read Commands: 0 00:13:45.842 Host Write Commands: 0 00:13:45.842 Controller Busy Time: 0 minutes 00:13:45.842 Power Cycles: 0 00:13:45.842 Power On Hours: 0 hours 00:13:45.842 Unsafe Shutdowns: 0 00:13:45.842 Unrecoverable Media Errors: 0 00:13:45.842 Lifetime Error Log Entries: 0 00:13:45.842 Warning Temperature Time: 0 minutes 00:13:45.842 Critical Temperature Time: 0 minutes 00:13:45.842 00:13:45.842 Number of Queues 00:13:45.842 ================ 00:13:45.842 Number of I/O Submission Queues: 127 00:13:45.842 Number of I/O Completion Queues: 127 00:13:45.842 00:13:45.842 Active Namespaces 00:13:45.842 ================= 00:13:45.842 Namespace ID:1 00:13:45.842 Error Recovery Timeout: Unlimited 00:13:45.842 Command Set Identifier: NVM (00h) 00:13:45.842 Deallocate: Supported 00:13:45.842 Deallocated/Unwritten Error: Not Supported 00:13:45.842 Deallocated Read Value: Unknown 00:13:45.842 Deallocate in Write Zeroes: Not Supported 00:13:45.842 Deallocated Guard Field: 0xFFFF 00:13:45.842 Flush: Supported 00:13:45.842 Reservation: Supported 00:13:45.842 Namespace Sharing Capabilities: Multiple Controllers 00:13:45.842 Size (in LBAs): 131072 (0GiB) 00:13:45.842 Capacity (in LBAs): 131072 (0GiB) 00:13:45.842 Utilization (in LBAs): 131072 (0GiB) 00:13:45.842 NGUID: 915205BDDC5045F786FE2055863A4649 00:13:45.842 UUID: 915205bd-dc50-45f7-86fe-2055863a4649 00:13:45.842 Thin Provisioning: Not Supported 00:13:45.842 Per-NS Atomic Units: Yes 00:13:45.842 Atomic Boundary Size (Normal): 0 00:13:45.842 Atomic Boundary Size (PFail): 0 00:13:45.842 Atomic Boundary Offset: 0 00:13:45.842 Maximum Single Source Range Length: 65535 00:13:45.842 Maximum Copy Length: 65535 00:13:45.842 Maximum Source Range Count: 1 00:13:45.842 NGUID/EUI64 Never Reused: No 00:13:45.842 Namespace Write Protected: No 00:13:45.842 Number of LBA Formats: 1 00:13:45.842 Current LBA Format: LBA Format #00 00:13:45.842 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:45.842 00:13:45.842 17:08:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:45.842 [2024-11-20 17:08:03.867277] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:51.114 Initializing NVMe Controllers 00:13:51.114 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:51.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:51.114 Initialization complete. Launching workers. 00:13:51.114 ======================================================== 00:13:51.114 Latency(us) 00:13:51.114 Device Information : IOPS MiB/s Average min max 00:13:51.114 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39947.61 156.05 3204.02 941.47 9420.47 00:13:51.114 ======================================================== 00:13:51.114 Total : 39947.61 156.05 3204.02 941.47 9420.47 00:13:51.114 00:13:51.114 [2024-11-20 17:08:08.890622] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:51.114 17:08:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:51.114 [2024-11-20 17:08:09.124750] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:56.386 Initializing NVMe Controllers 00:13:56.386 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:56.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:56.386 Initialization complete. Launching workers. 00:13:56.386 ======================================================== 00:13:56.386 Latency(us) 00:13:56.386 Device Information : IOPS MiB/s Average min max 00:13:56.386 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16044.00 62.67 7986.38 4988.36 15447.79 00:13:56.386 ======================================================== 00:13:56.386 Total : 16044.00 62.67 7986.38 4988.36 15447.79 00:13:56.386 00:13:56.386 [2024-11-20 17:08:14.160264] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:56.386 17:08:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:56.386 [2024-11-20 17:08:14.371273] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:01.653 [2024-11-20 17:08:19.455527] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:01.653 Initializing NVMe Controllers 00:14:01.653 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:01.653 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:01.653 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:01.653 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:01.653 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:01.653 Initialization complete. 
Launching workers. 00:14:01.653 Starting thread on core 2 00:14:01.653 Starting thread on core 3 00:14:01.653 Starting thread on core 1 00:14:01.653 17:08:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:01.912 [2024-11-20 17:08:19.743283] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.296 [2024-11-20 17:08:22.807772] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.296 Initializing NVMe Controllers 00:14:05.296 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.296 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:05.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:05.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:05.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:05.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:05.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:05.296 Initialization complete. Launching workers. 
00:14:05.296 Starting thread on core 1 with urgent priority queue 00:14:05.296 Starting thread on core 2 with urgent priority queue 00:14:05.296 Starting thread on core 3 with urgent priority queue 00:14:05.296 Starting thread on core 0 with urgent priority queue 00:14:05.296 SPDK bdev Controller (SPDK1 ) core 0: 8078.67 IO/s 12.38 secs/100000 ios 00:14:05.296 SPDK bdev Controller (SPDK1 ) core 1: 8585.33 IO/s 11.65 secs/100000 ios 00:14:05.296 SPDK bdev Controller (SPDK1 ) core 2: 9615.67 IO/s 10.40 secs/100000 ios 00:14:05.296 SPDK bdev Controller (SPDK1 ) core 3: 7777.00 IO/s 12.86 secs/100000 ios 00:14:05.296 ======================================================== 00:14:05.296 00:14:05.296 17:08:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:05.296 [2024-11-20 17:08:23.092644] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.296 Initializing NVMe Controllers 00:14:05.296 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.296 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.296 Namespace ID: 1 size: 0GB 00:14:05.296 Initialization complete. 00:14:05.296 INFO: using host memory buffer for IO 00:14:05.296 Hello world! 
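The arbitration run above prints one throughput line per core in the form "SPDK bdev Controller (SPDK1 ) core 0: 8078.67 IO/s 12.38 secs/100000 ios". A minimal Python sketch for pulling those figures out of a captured log, assuming only the line shape seen in this transcript (the function and field names are our own, not part of SPDK):

```python
import re

# Matches per-core result lines from SPDK's arbitration example as they
# appear in this log; the regex is derived from the transcript above.
LINE_RE = re.compile(
    r"core\s+(?P<core>\d+):\s+(?P<iops>[\d.]+)\s+IO/s\s+"
    r"(?P<secs>[\d.]+)\s+secs/100000 ios"
)

def parse_arbitration_lines(lines):
    """Return (core, io_per_sec, secs_per_100k_ios) tuples, one per match."""
    results = []
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            results.append((int(m.group("core")),
                            float(m.group("iops")),
                            float(m.group("secs"))))
    return results

sample = [
    "SPDK bdev Controller (SPDK1 ) core 0: 8078.67 IO/s 12.38 secs/100000 ios",
    "SPDK bdev Controller (SPDK1 ) core 1: 8585.33 IO/s 11.65 secs/100000 ios",
]
parsed = parse_arbitration_lines(sample)
```

Such a parser is useful when comparing arbitration results across autotest runs, since the example binary itself does not emit machine-readable output.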
00:14:05.296 [2024-11-20 17:08:23.127871] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.296 17:08:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:05.555 [2024-11-20 17:08:23.405591] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:06.493 Initializing NVMe Controllers 00:14:06.493 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:06.493 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:06.493 Initialization complete. Launching workers. 00:14:06.493 submit (in ns) avg, min, max = 6330.8, 3197.1, 3999721.0 00:14:06.493 complete (in ns) avg, min, max = 20743.8, 1749.5, 3999156.2 00:14:06.493 00:14:06.493 Submit histogram 00:14:06.493 ================ 00:14:06.493 Range in us Cumulative Count 00:14:06.493 3.185 - 3.200: 0.0061% ( 1) 00:14:06.493 3.200 - 3.215: 0.0667% ( 10) 00:14:06.493 3.215 - 3.230: 0.1698% ( 17) 00:14:06.493 3.230 - 3.246: 0.4063% ( 39) 00:14:06.493 3.246 - 3.261: 0.9095% ( 83) 00:14:06.493 3.261 - 3.276: 2.9044% ( 329) 00:14:06.493 3.276 - 3.291: 7.5006% ( 758) 00:14:06.493 3.291 - 3.307: 14.0735% ( 1084) 00:14:06.493 3.307 - 3.322: 20.8586% ( 1119) 00:14:06.493 3.322 - 3.337: 28.0136% ( 1180) 00:14:06.493 3.337 - 3.352: 33.8892% ( 969) 00:14:06.493 3.352 - 3.368: 39.3706% ( 904) 00:14:06.493 3.368 - 3.383: 45.7252% ( 1048) 00:14:06.493 3.383 - 3.398: 52.2617% ( 1078) 00:14:06.493 3.398 - 3.413: 57.3733% ( 843) 00:14:06.493 3.413 - 3.429: 63.1640% ( 955) 00:14:06.493 3.429 - 3.444: 70.6342% ( 1232) 00:14:06.493 3.444 - 3.459: 75.3820% ( 783) 00:14:06.493 3.459 - 3.474: 80.0873% ( 776) 00:14:06.493 3.474 - 3.490: 83.6163% ( 582) 00:14:06.493 3.490 - 3.505: 85.8537% ( 369) 
00:14:06.493 3.505 - 3.520: 87.2059% ( 223) 00:14:06.493 3.520 - 3.535: 87.7031% ( 82) 00:14:06.493 3.535 - 3.550: 88.0002% ( 49) 00:14:06.493 3.550 - 3.566: 88.3701% ( 61) 00:14:06.493 3.566 - 3.581: 88.9401% ( 94) 00:14:06.493 3.581 - 3.596: 89.6556% ( 118) 00:14:06.493 3.596 - 3.611: 90.5772% ( 152) 00:14:06.493 3.611 - 3.627: 91.5292% ( 157) 00:14:06.493 3.627 - 3.642: 92.4388% ( 150) 00:14:06.493 3.642 - 3.657: 93.5302% ( 180) 00:14:06.493 3.657 - 3.672: 94.5246% ( 164) 00:14:06.493 3.672 - 3.688: 95.4948% ( 160) 00:14:06.493 3.688 - 3.703: 96.4104% ( 151) 00:14:06.493 3.703 - 3.718: 97.2714% ( 142) 00:14:06.493 3.718 - 3.733: 97.9869% ( 118) 00:14:06.493 3.733 - 3.749: 98.4659% ( 79) 00:14:06.493 3.749 - 3.764: 98.8358% ( 61) 00:14:06.493 3.764 - 3.779: 99.1390% ( 50) 00:14:06.493 3.779 - 3.794: 99.3027% ( 27) 00:14:06.493 3.794 - 3.810: 99.4725% ( 28) 00:14:06.493 3.810 - 3.825: 99.5270% ( 9) 00:14:06.493 3.825 - 3.840: 99.5877% ( 10) 00:14:06.493 3.840 - 3.855: 99.6241% ( 6) 00:14:06.493 3.855 - 3.870: 99.6362% ( 2) 00:14:06.493 3.870 - 3.886: 99.6423% ( 1) 00:14:06.493 3.886 - 3.901: 99.6483% ( 1) 00:14:06.493 3.901 - 3.931: 99.6544% ( 1) 00:14:06.493 4.023 - 4.053: 99.6604% ( 1) 00:14:06.493 5.303 - 5.333: 99.6665% ( 1) 00:14:06.493 5.425 - 5.455: 99.6726% ( 1) 00:14:06.493 5.608 - 5.638: 99.6786% ( 1) 00:14:06.493 5.730 - 5.760: 99.6847% ( 1) 00:14:06.493 5.851 - 5.882: 99.6908% ( 1) 00:14:06.493 6.065 - 6.095: 99.6968% ( 1) 00:14:06.493 6.156 - 6.187: 99.7029% ( 1) 00:14:06.493 6.491 - 6.522: 99.7089% ( 1) 00:14:06.493 6.552 - 6.583: 99.7150% ( 1) 00:14:06.493 6.613 - 6.644: 99.7211% ( 1) 00:14:06.493 6.644 - 6.674: 99.7271% ( 1) 00:14:06.493 6.674 - 6.705: 99.7332% ( 1) 00:14:06.493 6.766 - 6.796: 99.7393% ( 1) 00:14:06.493 6.857 - 6.888: 99.7514% ( 2) 00:14:06.493 6.888 - 6.918: 99.7635% ( 2) 00:14:06.493 6.918 - 6.949: 99.7696% ( 1) 00:14:06.493 6.949 - 6.979: 99.7756% ( 1) 00:14:06.493 7.010 - 7.040: 99.7817% ( 1) 00:14:06.493 7.070 - 7.101: 
99.7878% ( 1) 00:14:06.493 7.131 - 7.162: 99.7938% ( 1) 00:14:06.493 7.162 - 7.192: 99.7999% ( 1) 00:14:06.493 7.192 - 7.223: 99.8060% ( 1) 00:14:06.493 7.314 - 7.345: 99.8181% ( 2) 00:14:06.493 7.345 - 7.375: 99.8242% ( 1) 00:14:06.493 7.375 - 7.406: 99.8302% ( 1) 00:14:06.493 7.497 - 7.528: 99.8363% ( 1) 00:14:06.493 7.650 - 7.680: 99.8484% ( 2) 00:14:06.493 7.771 - 7.802: 99.8545% ( 1) 00:14:06.493 7.802 - 7.863: 99.8666% ( 2) 00:14:06.493 7.924 - 7.985: 99.8727% ( 1) 00:14:06.493 [2024-11-20 17:08:24.426597] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:06.494 7.985 - 8.046: 99.8787% ( 1) 00:14:06.494 8.290 - 8.350: 99.8848% ( 1) 00:14:06.494 8.350 - 8.411: 99.8909% ( 1) 00:14:06.494 8.472 - 8.533: 99.8969% ( 1) 00:14:06.494 8.655 - 8.716: 99.9030% ( 1) 00:14:06.494 8.777 - 8.838: 99.9151% ( 2) 00:14:06.494 9.265 - 9.326: 99.9212% ( 1) 00:14:06.494 16.091 - 16.213: 99.9272% ( 1) 00:14:06.494 3994.575 - 4025.783: 100.0000% ( 12) 00:14:06.494 00:14:06.494 Complete histogram 00:14:06.494 ================== 00:14:06.494 Range in us Cumulative Count 00:14:06.494 1.745 - 1.752: 0.0061% ( 1) 00:14:06.494 1.752 - 1.760: 0.0121% ( 1) 00:14:06.494 1.768 - 1.775: 0.0364% ( 4) 00:14:06.494 1.775 - 1.783: 0.1940% ( 26) 00:14:06.494 1.783 - 1.790: 0.6003% ( 67) 00:14:06.494 1.790 - 1.798: 1.2612% ( 109) 00:14:06.494 1.798 - 1.806: 1.9282% ( 110) 00:14:06.494 1.806 - 1.813: 2.4618% ( 88) 00:14:06.494 1.813 - 1.821: 3.3531% ( 147) 00:14:06.494 1.821 - 1.829: 8.9619% ( 925) 00:14:06.494 1.829 - 1.836: 28.3653% ( 3200) 00:14:06.494 1.836 - 1.844: 58.3313% ( 4942) 00:14:06.494 1.844 - 1.851: 80.1722% ( 3602) 00:14:06.494 1.851 - 1.859: 90.3468% ( 1678) 00:14:06.494 1.859 - 1.867: 95.1067% ( 785) 00:14:06.494 1.867 - 1.874: 97.4654% ( 389) 00:14:06.494 1.874 - 1.882: 98.3083% ( 139) 00:14:06.494 1.882 - 1.890: 98.7327% ( 70) 00:14:06.494 1.890 - 1.897: 98.8782% ( 24) 00:14:06.494 1.897 - 1.905: 98.9874% ( 18) 
00:14:06.494 1.905 - 1.912: 99.1511% ( 27) 00:14:06.494 1.912 - 1.920: 99.2239% ( 12) 00:14:06.494 1.920 - 1.928: 99.2360% ( 2) 00:14:06.494 1.928 - 1.935: 99.2421% ( 1) 00:14:06.494 1.935 - 1.943: 99.2481% ( 1) 00:14:06.494 1.950 - 1.966: 99.2542% ( 1) 00:14:06.494 1.966 - 1.981: 99.2602% ( 1) 00:14:06.494 1.981 - 1.996: 99.2663% ( 1) 00:14:06.494 1.996 - 2.011: 99.2724% ( 1) 00:14:06.494 2.057 - 2.072: 99.2784% ( 1) 00:14:06.494 2.088 - 2.103: 99.2845% ( 1) 00:14:06.494 2.103 - 2.118: 99.2906% ( 1) 00:14:06.494 2.210 - 2.225: 99.2966% ( 1) 00:14:06.494 2.225 - 2.240: 99.3027% ( 1) 00:14:06.494 3.794 - 3.810: 99.3088% ( 1) 00:14:06.494 3.886 - 3.901: 99.3148% ( 1) 00:14:06.494 4.023 - 4.053: 99.3209% ( 1) 00:14:06.494 4.053 - 4.084: 99.3269% ( 1) 00:14:06.494 4.206 - 4.236: 99.3330% ( 1) 00:14:06.494 4.267 - 4.297: 99.3391% ( 1) 00:14:06.494 4.541 - 4.571: 99.3451% ( 1) 00:14:06.494 4.571 - 4.602: 99.3512% ( 1) 00:14:06.494 4.602 - 4.632: 99.3573% ( 1) 00:14:06.494 4.632 - 4.663: 99.3633% ( 1) 00:14:06.494 4.663 - 4.693: 99.3694% ( 1) 00:14:06.494 4.785 - 4.815: 99.3815% ( 2) 00:14:06.494 4.846 - 4.876: 99.3876% ( 1) 00:14:06.494 4.968 - 4.998: 99.3936% ( 1) 00:14:06.494 5.029 - 5.059: 99.4058% ( 2) 00:14:06.494 5.090 - 5.120: 99.4118% ( 1) 00:14:06.494 5.181 - 5.211: 99.4300% ( 3) 00:14:06.494 5.211 - 5.242: 99.4361% ( 1) 00:14:06.494 5.364 - 5.394: 99.4422% ( 1) 00:14:06.494 5.516 - 5.547: 99.4482% ( 1) 00:14:06.494 5.699 - 5.730: 99.4543% ( 1) 00:14:06.494 6.126 - 6.156: 99.4664% ( 2) 00:14:06.494 6.156 - 6.187: 99.4725% ( 1) 00:14:06.494 6.309 - 6.339: 99.4785% ( 1) 00:14:06.494 6.674 - 6.705: 99.4846% ( 1) 00:14:06.494 6.766 - 6.796: 99.4907% ( 1) 00:14:06.494 6.918 - 6.949: 99.4967% ( 1) 00:14:06.494 6.979 - 7.010: 99.5028% ( 1) 00:14:06.494 7.497 - 7.528: 99.5089% ( 1) 00:14:06.494 7.528 - 7.558: 99.5149% ( 1) 00:14:06.494 8.107 - 8.168: 99.5210% ( 1) 00:14:06.494 10.179 - 10.240: 99.5270% ( 1) 00:14:06.494 3994.575 - 4025.783: 100.0000% ( 78) 00:14:06.494 
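The overhead tool above reports submit and complete latencies as cumulative histograms ("Range in us Cumulative Count", with a percentage and per-bucket count on each row). A rough sketch of how such a cumulative table can be computed from raw latency samples, under the assumption that buckets are half-open ranges; this mirrors the table's shape, not SPDK's internal histogram implementation:

```python
def cumulative_histogram(latencies_us, bucket_edges_us):
    """Bucket latency samples (in microseconds) into half-open ranges
    [lo, hi) and return rows of (lo, hi, cumulative_pct, bucket_count),
    mirroring the 'Range in us Cumulative Count' tables printed above."""
    total = len(latencies_us)
    rows = []
    running = 0
    for lo, hi in zip(bucket_edges_us, bucket_edges_us[1:]):
        in_bucket = sum(1 for v in latencies_us if lo <= v < hi)
        running += in_bucket
        rows.append((lo, hi, 100.0 * running / total, in_bucket))
    return rows

# Toy data: three samples, two buckets.
rows = cumulative_histogram([3.2, 3.3, 5.0], [0, 4, 8])
```

The last row of a complete table always reaches 100%, which is why the log's final histogram line ("3994.575 - 4025.783: 100.0000%") captures the handful of multi-second outliers.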
00:14:06.494 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:06.494 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:06.494 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:06.494 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:06.494 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:06.754 [ 00:14:06.754 { 00:14:06.754 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.754 "subtype": "Discovery", 00:14:06.754 "listen_addresses": [], 00:14:06.754 "allow_any_host": true, 00:14:06.754 "hosts": [] 00:14:06.754 }, 00:14:06.754 { 00:14:06.754 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.754 "subtype": "NVMe", 00:14:06.754 "listen_addresses": [ 00:14:06.754 { 00:14:06.754 "trtype": "VFIOUSER", 00:14:06.754 "adrfam": "IPv4", 00:14:06.754 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.754 "trsvcid": "0" 00:14:06.754 } 00:14:06.754 ], 00:14:06.754 "allow_any_host": true, 00:14:06.754 "hosts": [], 00:14:06.754 "serial_number": "SPDK1", 00:14:06.754 "model_number": "SPDK bdev Controller", 00:14:06.754 "max_namespaces": 32, 00:14:06.754 "min_cntlid": 1, 00:14:06.754 "max_cntlid": 65519, 00:14:06.754 "namespaces": [ 00:14:06.754 { 00:14:06.754 "nsid": 1, 00:14:06.754 "bdev_name": "Malloc1", 00:14:06.754 "name": "Malloc1", 00:14:06.754 "nguid": "915205BDDC5045F786FE2055863A4649", 00:14:06.754 "uuid": "915205bd-dc50-45f7-86fe-2055863a4649" 00:14:06.754 } 00:14:06.754 ] 00:14:06.754 }, 00:14:06.754 { 00:14:06.754 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.754 
"subtype": "NVMe", 00:14:06.754 "listen_addresses": [ 00:14:06.754 { 00:14:06.754 "trtype": "VFIOUSER", 00:14:06.754 "adrfam": "IPv4", 00:14:06.754 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.754 "trsvcid": "0" 00:14:06.754 } 00:14:06.754 ], 00:14:06.754 "allow_any_host": true, 00:14:06.754 "hosts": [], 00:14:06.754 "serial_number": "SPDK2", 00:14:06.754 "model_number": "SPDK bdev Controller", 00:14:06.754 "max_namespaces": 32, 00:14:06.754 "min_cntlid": 1, 00:14:06.754 "max_cntlid": 65519, 00:14:06.754 "namespaces": [ 00:14:06.754 { 00:14:06.754 "nsid": 1, 00:14:06.754 "bdev_name": "Malloc2", 00:14:06.754 "name": "Malloc2", 00:14:06.754 "nguid": "493F000923EC4E0C8A967C077A873494", 00:14:06.754 "uuid": "493f0009-23ec-4e0c-8a96-7c077a873494" 00:14:06.754 } 00:14:06.754 ] 00:14:06.754 } 00:14:06.754 ] 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2460620 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:06.754 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:07.013 [2024-11-20 17:08:24.812367] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:07.013 Malloc3 00:14:07.013 17:08:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:07.013 [2024-11-20 17:08:25.048135] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:07.272 17:08:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:07.272 Asynchronous Event Request test 00:14:07.272 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:07.272 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:07.272 Registering asynchronous event callbacks... 00:14:07.272 Starting namespace attribute notice tests for all controllers... 00:14:07.272 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:07.272 aer_cb - Changed Namespace 00:14:07.272 Cleaning up... 
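The `nvmf_get_subsystems` RPC dumps above return a JSON array of subsystem objects, each with an `nqn`, a `subtype`, and (for NVMe subsystems) a `namespaces` list. A small sketch that summarizes namespaces per subsystem from that output; the field names are taken from the dumps in this log, and the sample data here is a trimmed stand-in, not the full RPC response:

```python
import json

def summarize_subsystems(rpc_output):
    """Map each NVMe subsystem NQN to its namespace bdev names, given the
    JSON text returned by the nvmf_get_subsystems RPC."""
    summary = {}
    for sub in json.loads(rpc_output):
        if sub.get("subtype") != "NVMe":
            continue  # skip the discovery subsystem
        summary[sub["nqn"]] = [ns["name"] for ns in sub.get("namespaces", [])]
    return summary

# Trimmed stand-in shaped like the dump printed above.
sample = json.dumps([
    {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
    {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
     "namespaces": [{"nsid": 1, "name": "Malloc1"},
                    {"nsid": 2, "name": "Malloc3"}]},
])
summary = summarize_subsystems(sample)
```

Comparing such a summary before and after `nvmf_subsystem_add_ns` is one way to verify, as the AER test above does, that the new namespace (Malloc3 as nsid 2) actually appeared on cnode1.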
00:14:07.272 [ 00:14:07.272 { 00:14:07.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:07.272 "subtype": "Discovery", 00:14:07.272 "listen_addresses": [], 00:14:07.272 "allow_any_host": true, 00:14:07.272 "hosts": [] 00:14:07.272 }, 00:14:07.272 { 00:14:07.272 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:07.272 "subtype": "NVMe", 00:14:07.272 "listen_addresses": [ 00:14:07.272 { 00:14:07.272 "trtype": "VFIOUSER", 00:14:07.272 "adrfam": "IPv4", 00:14:07.272 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:07.272 "trsvcid": "0" 00:14:07.272 } 00:14:07.272 ], 00:14:07.272 "allow_any_host": true, 00:14:07.272 "hosts": [], 00:14:07.272 "serial_number": "SPDK1", 00:14:07.272 "model_number": "SPDK bdev Controller", 00:14:07.272 "max_namespaces": 32, 00:14:07.272 "min_cntlid": 1, 00:14:07.272 "max_cntlid": 65519, 00:14:07.272 "namespaces": [ 00:14:07.272 { 00:14:07.272 "nsid": 1, 00:14:07.272 "bdev_name": "Malloc1", 00:14:07.272 "name": "Malloc1", 00:14:07.272 "nguid": "915205BDDC5045F786FE2055863A4649", 00:14:07.272 "uuid": "915205bd-dc50-45f7-86fe-2055863a4649" 00:14:07.272 }, 00:14:07.272 { 00:14:07.272 "nsid": 2, 00:14:07.272 "bdev_name": "Malloc3", 00:14:07.272 "name": "Malloc3", 00:14:07.272 "nguid": "2B4D388433CB4A2DB49BAE8E7BBB5889", 00:14:07.272 "uuid": "2b4d3884-33cb-4a2d-b49b-ae8e7bbb5889" 00:14:07.272 } 00:14:07.272 ] 00:14:07.272 }, 00:14:07.272 { 00:14:07.272 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:07.272 "subtype": "NVMe", 00:14:07.272 "listen_addresses": [ 00:14:07.272 { 00:14:07.272 "trtype": "VFIOUSER", 00:14:07.272 "adrfam": "IPv4", 00:14:07.272 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:07.272 "trsvcid": "0" 00:14:07.272 } 00:14:07.272 ], 00:14:07.272 "allow_any_host": true, 00:14:07.272 "hosts": [], 00:14:07.272 "serial_number": "SPDK2", 00:14:07.272 "model_number": "SPDK bdev Controller", 00:14:07.272 "max_namespaces": 32, 00:14:07.272 "min_cntlid": 1, 00:14:07.272 "max_cntlid": 65519, 00:14:07.272 "namespaces": [ 
00:14:07.272 { 00:14:07.272 "nsid": 1, 00:14:07.272 "bdev_name": "Malloc2", 00:14:07.272 "name": "Malloc2", 00:14:07.272 "nguid": "493F000923EC4E0C8A967C077A873494", 00:14:07.272 "uuid": "493f0009-23ec-4e0c-8a96-7c077a873494" 00:14:07.272 } 00:14:07.273 ] 00:14:07.273 } 00:14:07.273 ] 00:14:07.273 17:08:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2460620 00:14:07.273 17:08:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:07.273 17:08:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:07.273 17:08:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:07.273 17:08:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:07.273 [2024-11-20 17:08:25.286848] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:14:07.273 [2024-11-20 17:08:25.286884] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460643 ] 00:14:07.533 [2024-11-20 17:08:25.323180] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:07.533 [2024-11-20 17:08:25.335433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:07.533 [2024-11-20 17:08:25.335458] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faeafb8b000 00:14:07.533 [2024-11-20 17:08:25.336438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.533 [2024-11-20 17:08:25.337444] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.533 [2024-11-20 17:08:25.338454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.533 [2024-11-20 17:08:25.339455] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:07.533 [2024-11-20 17:08:25.340459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:07.533 [2024-11-20 17:08:25.341466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.533 [2024-11-20 17:08:25.342474] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:07.533 
[2024-11-20 17:08:25.343478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:07.533 [2024-11-20 17:08:25.344496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:07.533 [2024-11-20 17:08:25.344507] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faeafb80000 00:14:07.533 [2024-11-20 17:08:25.345425] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:07.533 [2024-11-20 17:08:25.359476] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:07.533 [2024-11-20 17:08:25.359501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:07.533 [2024-11-20 17:08:25.361565] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:07.533 [2024-11-20 17:08:25.361603] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:07.533 [2024-11-20 17:08:25.361669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:07.533 [2024-11-20 17:08:25.361682] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:07.533 [2024-11-20 17:08:25.361687] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:07.534 [2024-11-20 17:08:25.362568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:07.534 [2024-11-20 17:08:25.362577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:07.534 [2024-11-20 17:08:25.362584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:07.534 [2024-11-20 17:08:25.363579] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:07.534 [2024-11-20 17:08:25.363589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:07.534 [2024-11-20 17:08:25.363595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:07.534 [2024-11-20 17:08:25.364585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:07.534 [2024-11-20 17:08:25.364593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:07.534 [2024-11-20 17:08:25.365598] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:07.534 [2024-11-20 17:08:25.365607] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:07.534 [2024-11-20 17:08:25.365611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:07.534 [2024-11-20 17:08:25.365620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:07.534 [2024-11-20 17:08:25.365727] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:07.534 [2024-11-20 17:08:25.365731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:07.534 [2024-11-20 17:08:25.365736] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:07.534 [2024-11-20 17:08:25.366601] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:07.534 [2024-11-20 17:08:25.367607] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:07.534 [2024-11-20 17:08:25.368617] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:07.534 [2024-11-20 17:08:25.369620] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.534 [2024-11-20 17:08:25.369659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:07.534 [2024-11-20 17:08:25.370626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:07.534 [2024-11-20 17:08:25.370635] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:07.534 [2024-11-20 17:08:25.370639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.370656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:07.534 [2024-11-20 17:08:25.370666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.370678] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:07.534 [2024-11-20 17:08:25.370682] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:07.534 [2024-11-20 17:08:25.370686] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.534 [2024-11-20 17:08:25.370697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:07.534 [2024-11-20 17:08:25.381213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:07.534 [2024-11-20 17:08:25.381226] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:07.534 [2024-11-20 17:08:25.381230] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:07.534 [2024-11-20 17:08:25.381234] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:07.534 [2024-11-20 17:08:25.381238] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:07.534 [2024-11-20 17:08:25.381245] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:07.534 [2024-11-20 17:08:25.381249] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:07.534 [2024-11-20 17:08:25.381256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.381264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.381274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:07.534 [2024-11-20 17:08:25.389210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:07.534 [2024-11-20 17:08:25.389222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.534 [2024-11-20 17:08:25.389230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.534 [2024-11-20 17:08:25.389237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.534 [2024-11-20 17:08:25.389244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.534 [2024-11-20 17:08:25.389248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.389255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.389262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:07.534 [2024-11-20 17:08:25.397210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:07.534 [2024-11-20 17:08:25.397220] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:07.534 [2024-11-20 17:08:25.397225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.397231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.397236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.397244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:07.534 [2024-11-20 17:08:25.405209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:07.534 [2024-11-20 17:08:25.405263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.405270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:07.534 
[2024-11-20 17:08:25.405277] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:07.534 [2024-11-20 17:08:25.405281] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:07.534 [2024-11-20 17:08:25.405284] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.534 [2024-11-20 17:08:25.405290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:07.534 [2024-11-20 17:08:25.413211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:07.534 [2024-11-20 17:08:25.413226] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:07.534 [2024-11-20 17:08:25.413234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.413241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.413247] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:07.534 [2024-11-20 17:08:25.413251] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:07.534 [2024-11-20 17:08:25.413254] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.534 [2024-11-20 17:08:25.413260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:07.534 [2024-11-20 17:08:25.421209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:07.534 [2024-11-20 17:08:25.421224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.421232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.421238] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:07.534 [2024-11-20 17:08:25.421242] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:07.534 [2024-11-20 17:08:25.421245] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.534 [2024-11-20 17:08:25.421251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:07.534 [2024-11-20 17:08:25.429207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:07.534 [2024-11-20 17:08:25.429216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.429222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:07.534 [2024-11-20 17:08:25.429229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:07.535 [2024-11-20 17:08:25.429235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:07.535 [2024-11-20 17:08:25.429239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:07.535 [2024-11-20 17:08:25.429244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:07.535 [2024-11-20 17:08:25.429248] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:07.535 [2024-11-20 17:08:25.429252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:07.535 [2024-11-20 17:08:25.429256] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:07.535 [2024-11-20 17:08:25.429271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:07.535 [2024-11-20 17:08:25.437211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:07.535 [2024-11-20 17:08:25.437224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:07.535 [2024-11-20 17:08:25.445210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:07.535 [2024-11-20 17:08:25.445223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:07.535 [2024-11-20 17:08:25.453207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:07.535 [2024-11-20 
17:08:25.453219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:07.535 [2024-11-20 17:08:25.461208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:07.535 [2024-11-20 17:08:25.461223] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:07.535 [2024-11-20 17:08:25.461227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:07.535 [2024-11-20 17:08:25.461230] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:07.535 [2024-11-20 17:08:25.461233] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:07.535 [2024-11-20 17:08:25.461237] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:07.535 [2024-11-20 17:08:25.461243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:07.535 [2024-11-20 17:08:25.461249] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:07.535 [2024-11-20 17:08:25.461253] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:07.535 [2024-11-20 17:08:25.461256] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.535 [2024-11-20 17:08:25.461261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:07.535 [2024-11-20 17:08:25.461267] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:07.535 [2024-11-20 17:08:25.461271] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:07.535 [2024-11-20 17:08:25.461274] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.535 [2024-11-20 17:08:25.461279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:07.535 [2024-11-20 17:08:25.461286] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:07.535 [2024-11-20 17:08:25.461290] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:07.535 [2024-11-20 17:08:25.461293] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:07.535 [2024-11-20 17:08:25.461298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:07.535 [2024-11-20 17:08:25.469209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:07.535 [2024-11-20 17:08:25.469226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:07.535 [2024-11-20 17:08:25.469235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:07.535 [2024-11-20 17:08:25.469242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:07.535 ===================================================== 00:14:07.535 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:07.535 ===================================================== 00:14:07.535 Controller Capabilities/Features 00:14:07.535 
================================ 00:14:07.535 Vendor ID: 4e58 00:14:07.535 Subsystem Vendor ID: 4e58 00:14:07.535 Serial Number: SPDK2 00:14:07.535 Model Number: SPDK bdev Controller 00:14:07.535 Firmware Version: 25.01 00:14:07.535 Recommended Arb Burst: 6 00:14:07.535 IEEE OUI Identifier: 8d 6b 50 00:14:07.535 Multi-path I/O 00:14:07.535 May have multiple subsystem ports: Yes 00:14:07.535 May have multiple controllers: Yes 00:14:07.535 Associated with SR-IOV VF: No 00:14:07.535 Max Data Transfer Size: 131072 00:14:07.535 Max Number of Namespaces: 32 00:14:07.535 Max Number of I/O Queues: 127 00:14:07.535 NVMe Specification Version (VS): 1.3 00:14:07.535 NVMe Specification Version (Identify): 1.3 00:14:07.535 Maximum Queue Entries: 256 00:14:07.535 Contiguous Queues Required: Yes 00:14:07.535 Arbitration Mechanisms Supported 00:14:07.535 Weighted Round Robin: Not Supported 00:14:07.535 Vendor Specific: Not Supported 00:14:07.535 Reset Timeout: 15000 ms 00:14:07.535 Doorbell Stride: 4 bytes 00:14:07.535 NVM Subsystem Reset: Not Supported 00:14:07.535 Command Sets Supported 00:14:07.535 NVM Command Set: Supported 00:14:07.535 Boot Partition: Not Supported 00:14:07.535 Memory Page Size Minimum: 4096 bytes 00:14:07.535 Memory Page Size Maximum: 4096 bytes 00:14:07.535 Persistent Memory Region: Not Supported 00:14:07.535 Optional Asynchronous Events Supported 00:14:07.535 Namespace Attribute Notices: Supported 00:14:07.535 Firmware Activation Notices: Not Supported 00:14:07.535 ANA Change Notices: Not Supported 00:14:07.535 PLE Aggregate Log Change Notices: Not Supported 00:14:07.535 LBA Status Info Alert Notices: Not Supported 00:14:07.535 EGE Aggregate Log Change Notices: Not Supported 00:14:07.535 Normal NVM Subsystem Shutdown event: Not Supported 00:14:07.535 Zone Descriptor Change Notices: Not Supported 00:14:07.535 Discovery Log Change Notices: Not Supported 00:14:07.535 Controller Attributes 00:14:07.535 128-bit Host Identifier: Supported 00:14:07.535 
Non-Operational Permissive Mode: Not Supported 00:14:07.535 NVM Sets: Not Supported 00:14:07.535 Read Recovery Levels: Not Supported 00:14:07.535 Endurance Groups: Not Supported 00:14:07.535 Predictable Latency Mode: Not Supported 00:14:07.535 Traffic Based Keep ALive: Not Supported 00:14:07.535 Namespace Granularity: Not Supported 00:14:07.535 SQ Associations: Not Supported 00:14:07.535 UUID List: Not Supported 00:14:07.535 Multi-Domain Subsystem: Not Supported 00:14:07.535 Fixed Capacity Management: Not Supported 00:14:07.535 Variable Capacity Management: Not Supported 00:14:07.535 Delete Endurance Group: Not Supported 00:14:07.535 Delete NVM Set: Not Supported 00:14:07.535 Extended LBA Formats Supported: Not Supported 00:14:07.535 Flexible Data Placement Supported: Not Supported 00:14:07.535 00:14:07.535 Controller Memory Buffer Support 00:14:07.535 ================================ 00:14:07.535 Supported: No 00:14:07.535 00:14:07.535 Persistent Memory Region Support 00:14:07.535 ================================ 00:14:07.535 Supported: No 00:14:07.535 00:14:07.535 Admin Command Set Attributes 00:14:07.535 ============================ 00:14:07.535 Security Send/Receive: Not Supported 00:14:07.535 Format NVM: Not Supported 00:14:07.535 Firmware Activate/Download: Not Supported 00:14:07.535 Namespace Management: Not Supported 00:14:07.535 Device Self-Test: Not Supported 00:14:07.535 Directives: Not Supported 00:14:07.535 NVMe-MI: Not Supported 00:14:07.535 Virtualization Management: Not Supported 00:14:07.535 Doorbell Buffer Config: Not Supported 00:14:07.535 Get LBA Status Capability: Not Supported 00:14:07.535 Command & Feature Lockdown Capability: Not Supported 00:14:07.535 Abort Command Limit: 4 00:14:07.535 Async Event Request Limit: 4 00:14:07.535 Number of Firmware Slots: N/A 00:14:07.535 Firmware Slot 1 Read-Only: N/A 00:14:07.535 Firmware Activation Without Reset: N/A 00:14:07.535 Multiple Update Detection Support: N/A 00:14:07.535 Firmware Update 
Granularity: No Information Provided 00:14:07.535 Per-Namespace SMART Log: No 00:14:07.535 Asymmetric Namespace Access Log Page: Not Supported 00:14:07.535 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:07.535 Command Effects Log Page: Supported 00:14:07.535 Get Log Page Extended Data: Supported 00:14:07.535 Telemetry Log Pages: Not Supported 00:14:07.535 Persistent Event Log Pages: Not Supported 00:14:07.535 Supported Log Pages Log Page: May Support 00:14:07.535 Commands Supported & Effects Log Page: Not Supported 00:14:07.535 Feature Identifiers & Effects Log Page:May Support 00:14:07.535 NVMe-MI Commands & Effects Log Page: May Support 00:14:07.535 Data Area 4 for Telemetry Log: Not Supported 00:14:07.535 Error Log Page Entries Supported: 128 00:14:07.535 Keep Alive: Supported 00:14:07.536 Keep Alive Granularity: 10000 ms 00:14:07.536 00:14:07.536 NVM Command Set Attributes 00:14:07.536 ========================== 00:14:07.536 Submission Queue Entry Size 00:14:07.536 Max: 64 00:14:07.536 Min: 64 00:14:07.536 Completion Queue Entry Size 00:14:07.536 Max: 16 00:14:07.536 Min: 16 00:14:07.536 Number of Namespaces: 32 00:14:07.536 Compare Command: Supported 00:14:07.536 Write Uncorrectable Command: Not Supported 00:14:07.536 Dataset Management Command: Supported 00:14:07.536 Write Zeroes Command: Supported 00:14:07.536 Set Features Save Field: Not Supported 00:14:07.536 Reservations: Not Supported 00:14:07.536 Timestamp: Not Supported 00:14:07.536 Copy: Supported 00:14:07.536 Volatile Write Cache: Present 00:14:07.536 Atomic Write Unit (Normal): 1 00:14:07.536 Atomic Write Unit (PFail): 1 00:14:07.536 Atomic Compare & Write Unit: 1 00:14:07.536 Fused Compare & Write: Supported 00:14:07.536 Scatter-Gather List 00:14:07.536 SGL Command Set: Supported (Dword aligned) 00:14:07.536 SGL Keyed: Not Supported 00:14:07.536 SGL Bit Bucket Descriptor: Not Supported 00:14:07.536 SGL Metadata Pointer: Not Supported 00:14:07.536 Oversized SGL: Not Supported 00:14:07.536 SGL 
Metadata Address: Not Supported 00:14:07.536 SGL Offset: Not Supported 00:14:07.536 Transport SGL Data Block: Not Supported 00:14:07.536 Replay Protected Memory Block: Not Supported 00:14:07.536 00:14:07.536 Firmware Slot Information 00:14:07.536 ========================= 00:14:07.536 Active slot: 1 00:14:07.536 Slot 1 Firmware Revision: 25.01 00:14:07.536 00:14:07.536 00:14:07.536 Commands Supported and Effects 00:14:07.536 ============================== 00:14:07.536 Admin Commands 00:14:07.536 -------------- 00:14:07.536 Get Log Page (02h): Supported 00:14:07.536 Identify (06h): Supported 00:14:07.536 Abort (08h): Supported 00:14:07.536 Set Features (09h): Supported 00:14:07.536 Get Features (0Ah): Supported 00:14:07.536 Asynchronous Event Request (0Ch): Supported 00:14:07.536 Keep Alive (18h): Supported 00:14:07.536 I/O Commands 00:14:07.536 ------------ 00:14:07.536 Flush (00h): Supported LBA-Change 00:14:07.536 Write (01h): Supported LBA-Change 00:14:07.536 Read (02h): Supported 00:14:07.536 Compare (05h): Supported 00:14:07.536 Write Zeroes (08h): Supported LBA-Change 00:14:07.536 Dataset Management (09h): Supported LBA-Change 00:14:07.536 Copy (19h): Supported LBA-Change 00:14:07.536 00:14:07.536 Error Log 00:14:07.536 ========= 00:14:07.536 00:14:07.536 Arbitration 00:14:07.536 =========== 00:14:07.536 Arbitration Burst: 1 00:14:07.536 00:14:07.536 Power Management 00:14:07.536 ================ 00:14:07.536 Number of Power States: 1 00:14:07.536 Current Power State: Power State #0 00:14:07.536 Power State #0: 00:14:07.536 Max Power: 0.00 W 00:14:07.536 Non-Operational State: Operational 00:14:07.536 Entry Latency: Not Reported 00:14:07.536 Exit Latency: Not Reported 00:14:07.536 Relative Read Throughput: 0 00:14:07.536 Relative Read Latency: 0 00:14:07.536 Relative Write Throughput: 0 00:14:07.536 Relative Write Latency: 0 00:14:07.536 Idle Power: Not Reported 00:14:07.536 Active Power: Not Reported 00:14:07.536 Non-Operational Permissive Mode: Not 
Supported 00:14:07.536 00:14:07.536 Health Information 00:14:07.536 ================== 00:14:07.536 Critical Warnings: 00:14:07.536 Available Spare Space: OK 00:14:07.536 Temperature: OK 00:14:07.536 Device Reliability: OK 00:14:07.536 Read Only: No 00:14:07.536 Volatile Memory Backup: OK 00:14:07.536 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:07.536 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:07.536 Available Spare: 0% 00:14:07.536 Available Sp[2024-11-20 17:08:25.469329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:07.536 [2024-11-20 17:08:25.477208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:07.536 [2024-11-20 17:08:25.477236] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:07.536 [2024-11-20 17:08:25.477244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.536 [2024-11-20 17:08:25.477250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.536 [2024-11-20 17:08:25.477256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.536 [2024-11-20 17:08:25.477261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.536 [2024-11-20 17:08:25.477314] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:07.536 [2024-11-20 17:08:25.477324] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:07.536 
[2024-11-20 17:08:25.478316] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.536 [2024-11-20 17:08:25.478359] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:07.536 [2024-11-20 17:08:25.478366] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:07.536 [2024-11-20 17:08:25.479321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:07.536 [2024-11-20 17:08:25.479333] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:07.536 [2024-11-20 17:08:25.479377] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:07.536 [2024-11-20 17:08:25.480337] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:07.536 are Threshold: 0% 00:14:07.536 Life Percentage Used: 0% 00:14:07.536 Data Units Read: 0 00:14:07.536 Data Units Written: 0 00:14:07.536 Host Read Commands: 0 00:14:07.536 Host Write Commands: 0 00:14:07.536 Controller Busy Time: 0 minutes 00:14:07.536 Power Cycles: 0 00:14:07.536 Power On Hours: 0 hours 00:14:07.536 Unsafe Shutdowns: 0 00:14:07.536 Unrecoverable Media Errors: 0 00:14:07.536 Lifetime Error Log Entries: 0 00:14:07.536 Warning Temperature Time: 0 minutes 00:14:07.536 Critical Temperature Time: 0 minutes 00:14:07.536 00:14:07.536 Number of Queues 00:14:07.536 ================ 00:14:07.536 Number of I/O Submission Queues: 127 00:14:07.536 Number of I/O Completion Queues: 127 00:14:07.536 00:14:07.536 Active Namespaces 00:14:07.536 ================= 00:14:07.536 Namespace ID:1 00:14:07.536 Error Recovery Timeout: Unlimited 
00:14:07.536 Command Set Identifier: NVM (00h) 00:14:07.536 Deallocate: Supported 00:14:07.536 Deallocated/Unwritten Error: Not Supported 00:14:07.536 Deallocated Read Value: Unknown 00:14:07.536 Deallocate in Write Zeroes: Not Supported 00:14:07.536 Deallocated Guard Field: 0xFFFF 00:14:07.536 Flush: Supported 00:14:07.536 Reservation: Supported 00:14:07.536 Namespace Sharing Capabilities: Multiple Controllers 00:14:07.536 Size (in LBAs): 131072 (0GiB) 00:14:07.536 Capacity (in LBAs): 131072 (0GiB) 00:14:07.536 Utilization (in LBAs): 131072 (0GiB) 00:14:07.536 NGUID: 493F000923EC4E0C8A967C077A873494 00:14:07.536 UUID: 493f0009-23ec-4e0c-8a96-7c077a873494 00:14:07.536 Thin Provisioning: Not Supported 00:14:07.536 Per-NS Atomic Units: Yes 00:14:07.536 Atomic Boundary Size (Normal): 0 00:14:07.536 Atomic Boundary Size (PFail): 0 00:14:07.536 Atomic Boundary Offset: 0 00:14:07.536 Maximum Single Source Range Length: 65535 00:14:07.536 Maximum Copy Length: 65535 00:14:07.536 Maximum Source Range Count: 1 00:14:07.536 NGUID/EUI64 Never Reused: No 00:14:07.536 Namespace Write Protected: No 00:14:07.536 Number of LBA Formats: 1 00:14:07.536 Current LBA Format: LBA Format #00 00:14:07.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:07.536 00:14:07.536 17:08:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:07.797 [2024-11-20 17:08:25.708399] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.066 Initializing NVMe Controllers 00:14:13.066 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:13.066 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:13.066 Initialization complete. Launching workers. 00:14:13.067 ======================================================== 00:14:13.067 Latency(us) 00:14:13.067 Device Information : IOPS MiB/s Average min max 00:14:13.067 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39955.98 156.08 3203.83 947.67 8610.32 00:14:13.067 ======================================================== 00:14:13.067 Total : 39955.98 156.08 3203.83 947.67 8610.32 00:14:13.067 00:14:13.067 [2024-11-20 17:08:30.820456] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.067 17:08:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:13.067 [2024-11-20 17:08:31.053145] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.338 Initializing NVMe Controllers 00:14:18.338 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.338 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:18.338 Initialization complete. Launching workers. 
00:14:18.338 ======================================================== 00:14:18.338 Latency(us) 00:14:18.339 Device Information : IOPS MiB/s Average min max 00:14:18.339 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39960.00 156.09 3203.44 969.21 10189.28 00:14:18.339 ======================================================== 00:14:18.339 Total : 39960.00 156.09 3203.44 969.21 10189.28 00:14:18.339 00:14:18.339 [2024-11-20 17:08:36.073839] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:18.339 17:08:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:18.339 [2024-11-20 17:08:36.274049] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:23.611 [2024-11-20 17:08:41.405302] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:23.611 Initializing NVMe Controllers 00:14:23.611 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:23.611 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:23.611 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:23.611 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:23.611 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:23.611 Initialization complete. Launching workers. 
00:14:23.611 Starting thread on core 2 00:14:23.611 Starting thread on core 3 00:14:23.611 Starting thread on core 1 00:14:23.611 17:08:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:23.870 [2024-11-20 17:08:41.705635] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.159 [2024-11-20 17:08:44.763504] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.159 Initializing NVMe Controllers 00:14:27.159 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.159 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.159 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:27.159 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:27.159 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:27.159 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:27.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:27.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:27.159 Initialization complete. Launching workers. 
00:14:27.159 Starting thread on core 1 with urgent priority queue 00:14:27.159 Starting thread on core 2 with urgent priority queue 00:14:27.159 Starting thread on core 3 with urgent priority queue 00:14:27.159 Starting thread on core 0 with urgent priority queue 00:14:27.159 SPDK bdev Controller (SPDK2 ) core 0: 7245.00 IO/s 13.80 secs/100000 ios 00:14:27.159 SPDK bdev Controller (SPDK2 ) core 1: 7120.67 IO/s 14.04 secs/100000 ios 00:14:27.159 SPDK bdev Controller (SPDK2 ) core 2: 7749.00 IO/s 12.90 secs/100000 ios 00:14:27.159 SPDK bdev Controller (SPDK2 ) core 3: 7880.33 IO/s 12.69 secs/100000 ios 00:14:27.159 ======================================================== 00:14:27.159 00:14:27.159 17:08:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:27.159 [2024-11-20 17:08:45.047600] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.159 Initializing NVMe Controllers 00:14:27.159 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.159 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.159 Namespace ID: 1 size: 0GB 00:14:27.159 Initialization complete. 00:14:27.159 INFO: using host memory buffer for IO 00:14:27.159 Hello world! 
00:14:27.159 [2024-11-20 17:08:45.056666] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.160 17:08:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:27.418 [2024-11-20 17:08:45.341241] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:28.797 Initializing NVMe Controllers 00:14:28.797 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.797 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:28.797 Initialization complete. Launching workers. 00:14:28.797 submit (in ns) avg, min, max = 5562.4, 3189.5, 3999479.0 00:14:28.797 complete (in ns) avg, min, max = 23608.9, 1761.0, 4994982.9 00:14:28.797 00:14:28.797 Submit histogram 00:14:28.797 ================ 00:14:28.797 Range in us Cumulative Count 00:14:28.797 3.185 - 3.200: 0.0060% ( 1) 00:14:28.797 3.200 - 3.215: 0.0833% ( 13) 00:14:28.797 3.215 - 3.230: 0.2917% ( 35) 00:14:28.797 3.230 - 3.246: 0.5298% ( 40) 00:14:28.797 3.246 - 3.261: 1.2442% ( 120) 00:14:28.797 3.261 - 3.276: 4.1136% ( 482) 00:14:28.797 3.276 - 3.291: 9.7273% ( 943) 00:14:28.797 3.291 - 3.307: 15.4245% ( 957) 00:14:28.797 3.307 - 3.322: 22.5324% ( 1194) 00:14:28.797 3.322 - 3.337: 28.9677% ( 1081) 00:14:28.797 3.337 - 3.352: 34.3791% ( 909) 00:14:28.797 3.352 - 3.368: 40.0941% ( 960) 00:14:28.797 3.368 - 3.383: 46.5651% ( 1087) 00:14:28.797 3.383 - 3.398: 52.2860% ( 961) 00:14:28.797 3.398 - 3.413: 57.2806% ( 839) 00:14:28.797 3.413 - 3.429: 63.5909% ( 1060) 00:14:28.797 3.429 - 3.444: 70.8060% ( 1212) 00:14:28.797 3.444 - 3.459: 75.3721% ( 767) 00:14:28.797 3.459 - 3.474: 80.1107% ( 796) 00:14:28.797 3.474 - 3.490: 83.7064% ( 604) 00:14:28.797 3.490 - 3.505: 85.9686% ( 380) 
00:14:28.797 3.505 - 3.520: 87.1770% ( 203) 00:14:28.797 3.520 - 3.535: 87.7545% ( 97) 00:14:28.797 3.535 - 3.550: 88.0700% ( 53) 00:14:28.797 3.550 - 3.566: 88.3915% ( 54) 00:14:28.797 3.566 - 3.581: 89.0523% ( 111) 00:14:28.797 3.581 - 3.596: 89.8559% ( 135) 00:14:28.797 3.596 - 3.611: 90.8263% ( 163) 00:14:28.797 3.611 - 3.627: 91.7133% ( 149) 00:14:28.797 3.627 - 3.642: 92.6241% ( 153) 00:14:28.797 3.642 - 3.657: 93.5588% ( 157) 00:14:28.797 3.657 - 3.672: 94.4398% ( 148) 00:14:28.797 3.672 - 3.688: 95.4995% ( 178) 00:14:28.797 3.688 - 3.703: 96.4401% ( 158) 00:14:28.797 3.703 - 3.718: 97.2020% ( 128) 00:14:28.797 3.718 - 3.733: 97.9759% ( 130) 00:14:28.797 3.733 - 3.749: 98.4165% ( 74) 00:14:28.797 3.749 - 3.764: 98.7260% ( 52) 00:14:28.797 3.764 - 3.779: 99.0058% ( 47) 00:14:28.797 3.779 - 3.794: 99.2321% ( 38) 00:14:28.797 3.794 - 3.810: 99.4464% ( 36) 00:14:28.797 3.810 - 3.825: 99.5535% ( 18) 00:14:28.797 3.825 - 3.840: 99.6011% ( 8) 00:14:28.797 3.840 - 3.855: 99.6250% ( 4) 00:14:28.797 3.855 - 3.870: 99.6428% ( 3) 00:14:28.797 3.870 - 3.886: 99.6488% ( 1) 00:14:28.797 5.120 - 5.150: 99.6547% ( 1) 00:14:28.797 5.181 - 5.211: 99.6607% ( 1) 00:14:28.797 5.242 - 5.272: 99.6666% ( 1) 00:14:28.797 5.394 - 5.425: 99.6726% ( 1) 00:14:28.797 5.425 - 5.455: 99.6785% ( 1) 00:14:28.797 5.516 - 5.547: 99.6845% ( 1) 00:14:28.797 5.608 - 5.638: 99.6904% ( 1) 00:14:28.797 5.669 - 5.699: 99.6964% ( 1) 00:14:28.797 5.699 - 5.730: 99.7023% ( 1) 00:14:28.797 5.760 - 5.790: 99.7083% ( 1) 00:14:28.797 5.821 - 5.851: 99.7143% ( 1) 00:14:28.797 5.851 - 5.882: 99.7202% ( 1) 00:14:28.797 5.882 - 5.912: 99.7262% ( 1) 00:14:28.797 5.943 - 5.973: 99.7321% ( 1) 00:14:28.797 6.187 - 6.217: 99.7381% ( 1) 00:14:28.797 6.217 - 6.248: 99.7500% ( 2) 00:14:28.797 6.248 - 6.278: 99.7559% ( 1) 00:14:28.797 6.278 - 6.309: 99.7619% ( 1) 00:14:28.797 6.309 - 6.339: 99.7678% ( 1) 00:14:28.797 6.370 - 6.400: 99.7738% ( 1) 00:14:28.797 6.400 - 6.430: 99.7797% ( 1) 00:14:28.797 6.461 - 6.491: 
99.7857% ( 1) 00:14:28.797 6.491 - 6.522: 99.7916% ( 1) 00:14:28.797 6.522 - 6.552: 99.7976% ( 1) 00:14:28.797 6.644 - 6.674: 99.8035% ( 1) 00:14:28.797 6.674 - 6.705: 99.8095% ( 1) 00:14:28.797 6.766 - 6.796: 99.8155% ( 1) 00:14:28.797 6.796 - 6.827: 99.8214% ( 1) 00:14:28.797 7.010 - 7.040: 99.8274% ( 1) 00:14:28.797 7.070 - 7.101: 99.8333% ( 1) 00:14:28.797 7.101 - 7.131: 99.8393% ( 1) 00:14:28.797 7.131 - 7.162: 99.8452% ( 1) 00:14:28.797 7.253 - 7.284: 99.8512% ( 1) 00:14:28.797 [2024-11-20 17:08:46.437206] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:28.797 7.406 - 7.436: 99.8571% ( 1) 00:14:28.797 7.436 - 7.467: 99.8631% ( 1) 00:14:28.797 7.528 - 7.558: 99.8690% ( 1) 00:14:28.797 7.589 - 7.619: 99.8750% ( 1) 00:14:28.797 7.741 - 7.771: 99.8809% ( 1) 00:14:28.797 7.771 - 7.802: 99.8869% ( 1) 00:14:28.797 7.924 - 7.985: 99.8928% ( 1) 00:14:28.797 8.107 - 8.168: 99.8988% ( 1) 00:14:28.797 8.168 - 8.229: 99.9048% ( 1) 00:14:28.797 8.290 - 8.350: 99.9107% ( 1) 00:14:28.797 8.777 - 8.838: 99.9167% ( 1) 00:14:28.797 9.204 - 9.265: 99.9226% ( 1) 00:14:28.797 9.326 - 9.387: 99.9286% ( 1) 00:14:28.797 9.509 - 9.570: 99.9345% ( 1) 00:14:28.797 11.215 - 11.276: 99.9405% ( 1) 00:14:28.797 13.044 - 13.105: 99.9464% ( 1) 00:14:28.797 3994.575 - 4025.783: 100.0000% ( 9) 00:14:28.797 00:14:28.797 Complete histogram 00:14:28.797 ================== 00:14:28.797 Range in us Cumulative Count 00:14:28.797 1.760 - 1.768: 0.0893% ( 15) 00:14:28.797 1.768 - 1.775: 0.5536% ( 78) 00:14:28.797 1.775 - 1.783: 1.3990% ( 142) 00:14:28.797 1.783 - 1.790: 2.5777% ( 198) 00:14:28.797 1.790 - 1.798: 3.3456% ( 129) 00:14:28.797 1.798 - 1.806: 3.8993% ( 93) 00:14:28.797 1.806 - 1.813: 4.3100% ( 69) 00:14:28.797 1.813 - 1.821: 5.9948% ( 283) 00:14:28.797 1.821 - 1.829: 20.8477% ( 2495) 00:14:28.797 1.829 - 1.836: 53.7862% ( 5533) 00:14:28.797 1.836 - 1.844: 79.4083% ( 4304) 00:14:28.797 1.844 - 1.851: 89.6773% ( 1725) 00:14:28.797 
1.851 - 1.859: 93.0527% ( 567) 00:14:28.797 1.859 - 1.867: 95.1899% ( 359) 00:14:28.797 1.867 - 1.874: 96.3746% ( 199) 00:14:28.797 1.874 - 1.882: 96.8687% ( 83) 00:14:28.797 1.882 - 1.890: 97.1485% ( 47) 00:14:28.797 1.890 - 1.897: 97.4699% ( 54) 00:14:28.797 1.897 - 1.905: 98.0117% ( 91) 00:14:28.797 1.905 - 1.912: 98.5296% ( 87) 00:14:28.797 1.912 - 1.920: 98.8272% ( 50) 00:14:28.797 1.920 - 1.928: 99.0713% ( 41) 00:14:28.797 1.928 - 1.935: 99.1487% ( 13) 00:14:28.797 1.935 - 1.943: 99.1904% ( 7) 00:14:28.797 1.943 - 1.950: 99.2023% ( 2) 00:14:28.797 1.950 - 1.966: 99.2142% ( 2) 00:14:28.797 1.966 - 1.981: 99.2201% ( 1) 00:14:28.797 1.996 - 2.011: 99.2261% ( 1) 00:14:28.797 2.011 - 2.027: 99.2321% ( 1) 00:14:28.797 2.027 - 2.042: 99.2380% ( 1) 00:14:28.797 2.057 - 2.072: 99.2440% ( 1) 00:14:28.797 3.490 - 3.505: 99.2499% ( 1) 00:14:28.797 3.672 - 3.688: 99.2559% ( 1) 00:14:28.797 3.855 - 3.870: 99.2618% ( 1) 00:14:28.797 3.992 - 4.023: 99.2678% ( 1) 00:14:28.797 4.023 - 4.053: 99.2737% ( 1) 00:14:28.797 4.510 - 4.541: 99.2797% ( 1) 00:14:28.797 4.571 - 4.602: 99.2856% ( 1) 00:14:28.797 4.693 - 4.724: 99.2916% ( 1) 00:14:28.797 4.754 - 4.785: 99.2975% ( 1) 00:14:28.797 4.785 - 4.815: 99.3094% ( 2) 00:14:28.797 4.876 - 4.907: 99.3213% ( 2) 00:14:28.797 4.937 - 4.968: 99.3333% ( 2) 00:14:28.798 5.150 - 5.181: 99.3452% ( 2) 00:14:28.798 5.364 - 5.394: 99.3571% ( 2) 00:14:28.798 5.394 - 5.425: 99.3630% ( 1) 00:14:28.798 5.577 - 5.608: 99.3809% ( 3) 00:14:28.798 5.608 - 5.638: 99.3868% ( 1) 00:14:28.798 5.882 - 5.912: 99.3928% ( 1) 00:14:28.798 6.126 - 6.156: 99.3987% ( 1) 00:14:28.798 6.156 - 6.187: 99.4047% ( 1) 00:14:28.798 6.248 - 6.278: 99.4106% ( 1) 00:14:28.798 6.430 - 6.461: 99.4166% ( 1) 00:14:28.798 6.522 - 6.552: 99.4226% ( 1) 00:14:28.798 6.827 - 6.857: 99.4285% ( 1) 00:14:28.798 7.375 - 7.406: 99.4345% ( 1) 00:14:28.798 9.387 - 9.448: 99.4404% ( 1) 00:14:28.798 10.423 - 10.484: 99.4464% ( 1) 00:14:28.798 12.434 - 12.495: 99.4523% ( 1) 00:14:28.798 
2995.931 - 3011.535: 99.4583% ( 1) 00:14:28.798 3011.535 - 3027.139: 99.4642% ( 1) 00:14:28.798 3058.347 - 3073.950: 99.4702% ( 1) 00:14:28.798 3994.575 - 4025.783: 99.9940% ( 88) 00:14:28.798 4993.219 - 5024.427: 100.0000% ( 1) 00:14:28.798 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:28.798 [ 00:14:28.798 { 00:14:28.798 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:28.798 "subtype": "Discovery", 00:14:28.798 "listen_addresses": [], 00:14:28.798 "allow_any_host": true, 00:14:28.798 "hosts": [] 00:14:28.798 }, 00:14:28.798 { 00:14:28.798 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:28.798 "subtype": "NVMe", 00:14:28.798 "listen_addresses": [ 00:14:28.798 { 00:14:28.798 "trtype": "VFIOUSER", 00:14:28.798 "adrfam": "IPv4", 00:14:28.798 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:28.798 "trsvcid": "0" 00:14:28.798 } 00:14:28.798 ], 00:14:28.798 "allow_any_host": true, 00:14:28.798 "hosts": [], 00:14:28.798 "serial_number": "SPDK1", 00:14:28.798 "model_number": "SPDK bdev Controller", 00:14:28.798 "max_namespaces": 32, 00:14:28.798 "min_cntlid": 1, 00:14:28.798 "max_cntlid": 65519, 00:14:28.798 "namespaces": [ 00:14:28.798 { 00:14:28.798 "nsid": 1, 00:14:28.798 "bdev_name": "Malloc1", 00:14:28.798 "name": "Malloc1", 
00:14:28.798 "nguid": "915205BDDC5045F786FE2055863A4649", 00:14:28.798 "uuid": "915205bd-dc50-45f7-86fe-2055863a4649" 00:14:28.798 }, 00:14:28.798 { 00:14:28.798 "nsid": 2, 00:14:28.798 "bdev_name": "Malloc3", 00:14:28.798 "name": "Malloc3", 00:14:28.798 "nguid": "2B4D388433CB4A2DB49BAE8E7BBB5889", 00:14:28.798 "uuid": "2b4d3884-33cb-4a2d-b49b-ae8e7bbb5889" 00:14:28.798 } 00:14:28.798 ] 00:14:28.798 }, 00:14:28.798 { 00:14:28.798 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:28.798 "subtype": "NVMe", 00:14:28.798 "listen_addresses": [ 00:14:28.798 { 00:14:28.798 "trtype": "VFIOUSER", 00:14:28.798 "adrfam": "IPv4", 00:14:28.798 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:28.798 "trsvcid": "0" 00:14:28.798 } 00:14:28.798 ], 00:14:28.798 "allow_any_host": true, 00:14:28.798 "hosts": [], 00:14:28.798 "serial_number": "SPDK2", 00:14:28.798 "model_number": "SPDK bdev Controller", 00:14:28.798 "max_namespaces": 32, 00:14:28.798 "min_cntlid": 1, 00:14:28.798 "max_cntlid": 65519, 00:14:28.798 "namespaces": [ 00:14:28.798 { 00:14:28.798 "nsid": 1, 00:14:28.798 "bdev_name": "Malloc2", 00:14:28.798 "name": "Malloc2", 00:14:28.798 "nguid": "493F000923EC4E0C8A967C077A873494", 00:14:28.798 "uuid": "493f0009-23ec-4e0c-8a96-7c077a873494" 00:14:28.798 } 00:14:28.798 ] 00:14:28.798 } 00:14:28.798 ] 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2464315 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 
00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:28.798 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:29.057 [2024-11-20 17:08:46.850580] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:29.057 Malloc4 00:14:29.057 17:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:29.057 [2024-11-20 17:08:47.092437] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:29.316 Asynchronous Event Request test 00:14:29.316 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:29.316 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:29.316 Registering asynchronous event callbacks... 00:14:29.316 Starting namespace attribute notice tests for all controllers... 
00:14:29.316 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:29.316 aer_cb - Changed Namespace 00:14:29.316 Cleaning up... 00:14:29.316 [ 00:14:29.316 { 00:14:29.316 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:29.316 "subtype": "Discovery", 00:14:29.316 "listen_addresses": [], 00:14:29.316 "allow_any_host": true, 00:14:29.316 "hosts": [] 00:14:29.316 }, 00:14:29.316 { 00:14:29.316 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:29.316 "subtype": "NVMe", 00:14:29.316 "listen_addresses": [ 00:14:29.316 { 00:14:29.316 "trtype": "VFIOUSER", 00:14:29.316 "adrfam": "IPv4", 00:14:29.316 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:29.316 "trsvcid": "0" 00:14:29.316 } 00:14:29.316 ], 00:14:29.316 "allow_any_host": true, 00:14:29.316 "hosts": [], 00:14:29.316 "serial_number": "SPDK1", 00:14:29.316 "model_number": "SPDK bdev Controller", 00:14:29.316 "max_namespaces": 32, 00:14:29.316 "min_cntlid": 1, 00:14:29.316 "max_cntlid": 65519, 00:14:29.316 "namespaces": [ 00:14:29.316 { 00:14:29.316 "nsid": 1, 00:14:29.316 "bdev_name": "Malloc1", 00:14:29.316 "name": "Malloc1", 00:14:29.316 "nguid": "915205BDDC5045F786FE2055863A4649", 00:14:29.316 "uuid": "915205bd-dc50-45f7-86fe-2055863a4649" 00:14:29.316 }, 00:14:29.316 { 00:14:29.316 "nsid": 2, 00:14:29.316 "bdev_name": "Malloc3", 00:14:29.316 "name": "Malloc3", 00:14:29.316 "nguid": "2B4D388433CB4A2DB49BAE8E7BBB5889", 00:14:29.316 "uuid": "2b4d3884-33cb-4a2d-b49b-ae8e7bbb5889" 00:14:29.316 } 00:14:29.316 ] 00:14:29.316 }, 00:14:29.316 { 00:14:29.316 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:29.316 "subtype": "NVMe", 00:14:29.316 "listen_addresses": [ 00:14:29.316 { 00:14:29.316 "trtype": "VFIOUSER", 00:14:29.316 "adrfam": "IPv4", 00:14:29.316 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:29.316 "trsvcid": "0" 00:14:29.316 } 00:14:29.316 ], 00:14:29.316 "allow_any_host": true, 00:14:29.316 "hosts": [], 00:14:29.316 "serial_number": 
"SPDK2", 00:14:29.316 "model_number": "SPDK bdev Controller", 00:14:29.316 "max_namespaces": 32, 00:14:29.316 "min_cntlid": 1, 00:14:29.316 "max_cntlid": 65519, 00:14:29.316 "namespaces": [ 00:14:29.316 { 00:14:29.316 "nsid": 1, 00:14:29.316 "bdev_name": "Malloc2", 00:14:29.316 "name": "Malloc2", 00:14:29.316 "nguid": "493F000923EC4E0C8A967C077A873494", 00:14:29.316 "uuid": "493f0009-23ec-4e0c-8a96-7c077a873494" 00:14:29.316 }, 00:14:29.316 { 00:14:29.316 "nsid": 2, 00:14:29.316 "bdev_name": "Malloc4", 00:14:29.316 "name": "Malloc4", 00:14:29.316 "nguid": "D9B534D918AF485DBE9A377ADA961302", 00:14:29.316 "uuid": "d9b534d9-18af-485d-be9a-377ada961302" 00:14:29.316 } 00:14:29.316 ] 00:14:29.316 } 00:14:29.316 ] 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2464315 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2456642 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2456642 ']' 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2456642 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2456642 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2456642' 00:14:29.316 killing process with pid 2456642 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2456642 00:14:29.316 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2456642 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2464341 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2464341' 00:14:29.576 Process pid: 2464341 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2464341 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2464341 ']' 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.576 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:29.836 [2024-11-20 17:08:47.628575] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:29.836 [2024-11-20 17:08:47.629463] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:14:29.836 [2024-11-20 17:08:47.629506] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.836 [2024-11-20 17:08:47.707274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.836 [2024-11-20 17:08:47.748768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.836 [2024-11-20 17:08:47.748810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.836 [2024-11-20 17:08:47.748817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.836 [2024-11-20 17:08:47.748824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:29.836 [2024-11-20 17:08:47.748829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.836 [2024-11-20 17:08:47.753222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.836 [2024-11-20 17:08:47.753254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.836 [2024-11-20 17:08:47.753367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.836 [2024-11-20 17:08:47.753367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.836 [2024-11-20 17:08:47.821534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:29.836 [2024-11-20 17:08:47.821896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:29.836 [2024-11-20 17:08:47.822411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:29.836 [2024-11-20 17:08:47.822788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:29.836 [2024-11-20 17:08:47.822832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:29.836 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.836 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:29.836 17:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:31.215 17:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:31.215 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:31.215 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:31.215 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.215 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:31.215 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:31.474 Malloc1 00:14:31.474 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:31.474 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:31.733 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:31.991 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.991 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:31.991 17:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:32.250 Malloc2 00:14:32.250 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:32.508 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:32.508 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2464341 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2464341 ']' 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2464341 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.767 17:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464341 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464341' 00:14:32.767 killing process with pid 2464341 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2464341 00:14:32.767 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2464341 00:14:33.026 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:33.026 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:33.026 00:14:33.026 real 0m50.789s 00:14:33.026 user 3m16.443s 00:14:33.026 sys 0m3.187s 00:14:33.026 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.026 17:08:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:33.026 ************************************ 00:14:33.026 END TEST nvmf_vfio_user 00:14:33.026 ************************************ 00:14:33.026 17:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:33.026 17:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:33.026 17:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.026 17:08:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.027 ************************************ 00:14:33.027 START TEST nvmf_vfio_user_nvme_compliance 00:14:33.027 ************************************ 00:14:33.027 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:33.286 * Looking for test storage... 00:14:33.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.286 17:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.286 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.287 17:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:33.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.287 --rc genhtml_branch_coverage=1 00:14:33.287 --rc genhtml_function_coverage=1 00:14:33.287 --rc genhtml_legend=1 00:14:33.287 --rc geninfo_all_blocks=1 00:14:33.287 --rc geninfo_unexecuted_blocks=1 00:14:33.287 00:14:33.287 ' 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:33.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.287 --rc genhtml_branch_coverage=1 00:14:33.287 --rc genhtml_function_coverage=1 00:14:33.287 --rc genhtml_legend=1 00:14:33.287 --rc geninfo_all_blocks=1 00:14:33.287 --rc geninfo_unexecuted_blocks=1 00:14:33.287 00:14:33.287 ' 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:33.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.287 --rc genhtml_branch_coverage=1 00:14:33.287 --rc genhtml_function_coverage=1 00:14:33.287 --rc 
genhtml_legend=1 00:14:33.287 --rc geninfo_all_blocks=1 00:14:33.287 --rc geninfo_unexecuted_blocks=1 00:14:33.287 00:14:33.287 ' 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:33.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.287 --rc genhtml_branch_coverage=1 00:14:33.287 --rc genhtml_function_coverage=1 00:14:33.287 --rc genhtml_legend=1 00:14:33.287 --rc geninfo_all_blocks=1 00:14:33.287 --rc geninfo_unexecuted_blocks=1 00:14:33.287 00:14:33.287 ' 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.287 17:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:33.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:33.287 17:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2465103 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2465103' 00:14:33.287 Process pid: 2465103 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:33.287 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:33.288 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2465103 00:14:33.288 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2465103 ']' 00:14:33.288 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.288 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.288 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.288 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.288 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:33.288 [2024-11-20 17:08:51.297773] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:14:33.288 [2024-11-20 17:08:51.297826] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.547 [2024-11-20 17:08:51.371157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:33.547 [2024-11-20 17:08:51.409758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.547 [2024-11-20 17:08:51.409793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.547 [2024-11-20 17:08:51.409801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.547 [2024-11-20 17:08:51.409806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.547 [2024-11-20 17:08:51.409811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:33.547 [2024-11-20 17:08:51.411180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.547 [2024-11-20 17:08:51.411297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.547 [2024-11-20 17:08:51.411298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.547 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.547 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:33.547 17:08:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:34.483 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:34.483 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:34.483 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:34.483 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.483 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.742 17:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.742 malloc0 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:34.742 17:08:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:34.742 00:14:34.742 00:14:34.742 CUnit - A unit testing framework for C - Version 2.1-3 00:14:34.742 http://cunit.sourceforge.net/ 00:14:34.742 00:14:34.742 00:14:34.742 Suite: nvme_compliance 00:14:34.742 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 17:08:52.755967] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.742 [2024-11-20 17:08:52.757330] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:34.742 [2024-11-20 17:08:52.757347] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:34.742 [2024-11-20 17:08:52.757353] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:34.742 [2024-11-20 17:08:52.758985] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.001 passed 00:14:35.001 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 17:08:52.836544] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.001 [2024-11-20 17:08:52.839561] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.001 passed 00:14:35.001 Test: admin_identify_ns ...[2024-11-20 17:08:52.918487] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.001 [2024-11-20 17:08:52.979220] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:35.001 [2024-11-20 17:08:52.987214] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:35.001 [2024-11-20 17:08:53.008313] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:35.001 passed 00:14:35.261 Test: admin_get_features_mandatory_features ...[2024-11-20 17:08:53.085017] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.261 [2024-11-20 17:08:53.088031] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.261 passed 00:14:35.261 Test: admin_get_features_optional_features ...[2024-11-20 17:08:53.163543] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.261 [2024-11-20 17:08:53.166569] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.261 passed 00:14:35.261 Test: admin_set_features_number_of_queues ...[2024-11-20 17:08:53.245322] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.519 [2024-11-20 17:08:53.348295] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.519 passed 00:14:35.519 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 17:08:53.427926] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.519 [2024-11-20 17:08:53.430954] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.519 passed 00:14:35.519 Test: admin_get_log_page_with_lpo ...[2024-11-20 17:08:53.505607] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.778 [2024-11-20 17:08:53.574214] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:35.778 [2024-11-20 17:08:53.587296] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.778 passed 00:14:35.778 Test: fabric_property_get ...[2024-11-20 17:08:53.661132] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.778 [2024-11-20 17:08:53.662383] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:35.778 [2024-11-20 17:08:53.666167] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.778 passed 00:14:35.778 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 17:08:53.740700] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.778 [2024-11-20 17:08:53.741931] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:35.778 [2024-11-20 17:08:53.743722] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.778 passed 00:14:36.037 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 17:08:53.822422] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.037 [2024-11-20 17:08:53.907210] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:36.037 [2024-11-20 17:08:53.923212] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:36.037 [2024-11-20 17:08:53.928282] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.037 passed 00:14:36.037 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 17:08:54.004017] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.037 [2024-11-20 17:08:54.005244] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:36.037 [2024-11-20 17:08:54.007034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.037 passed 00:14:36.296 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 17:08:54.081675] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.296 [2024-11-20 17:08:54.160211] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:36.296 [2024-11-20 
17:08:54.184216] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:36.296 [2024-11-20 17:08:54.189291] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.296 passed 00:14:36.296 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 17:08:54.263018] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.296 [2024-11-20 17:08:54.264267] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:36.296 [2024-11-20 17:08:54.264291] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:36.296 [2024-11-20 17:08:54.266041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.296 passed 00:14:36.555 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 17:08:54.342735] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.555 [2024-11-20 17:08:54.434213] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:36.555 [2024-11-20 17:08:54.442211] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:36.555 [2024-11-20 17:08:54.450214] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:36.555 [2024-11-20 17:08:54.458212] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:36.555 [2024-11-20 17:08:54.487299] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.555 passed 00:14:36.555 Test: admin_create_io_sq_verify_pc ...[2024-11-20 17:08:54.563846] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.555 [2024-11-20 17:08:54.580215] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:36.813 [2024-11-20 17:08:54.598016] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.813 passed 00:14:36.813 Test: admin_create_io_qp_max_qps ...[2024-11-20 17:08:54.676536] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:37.749 [2024-11-20 17:08:55.784216] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:38.316 [2024-11-20 17:08:56.176031] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:38.316 passed 00:14:38.316 Test: admin_create_io_sq_shared_cq ...[2024-11-20 17:08:56.251977] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:38.575 [2024-11-20 17:08:56.384214] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:38.575 [2024-11-20 17:08:56.421269] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:38.575 passed 00:14:38.575 00:14:38.575 Run Summary: Type Total Ran Passed Failed Inactive 00:14:38.575 suites 1 1 n/a 0 0 00:14:38.575 tests 18 18 18 0 0 00:14:38.575 asserts 360 360 360 0 n/a 00:14:38.575 00:14:38.575 Elapsed time = 1.508 seconds 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2465103 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2465103 ']' 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2465103 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465103 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465103' 00:14:38.575 killing process with pid 2465103 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2465103 00:14:38.575 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2465103 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:38.835 00:14:38.835 real 0m5.662s 00:14:38.835 user 0m15.870s 00:14:38.835 sys 0m0.505s 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:38.835 ************************************ 00:14:38.835 END TEST nvmf_vfio_user_nvme_compliance 00:14:38.835 ************************************ 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.835 ************************************ 00:14:38.835 START TEST nvmf_vfio_user_fuzz 00:14:38.835 ************************************ 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:38.835 * Looking for test storage... 00:14:38.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:38.835 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:39.095 17:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:39.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.095 --rc genhtml_branch_coverage=1 00:14:39.095 --rc genhtml_function_coverage=1 00:14:39.095 --rc genhtml_legend=1 00:14:39.095 --rc geninfo_all_blocks=1 00:14:39.095 --rc geninfo_unexecuted_blocks=1 00:14:39.095 00:14:39.095 ' 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:39.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.095 --rc genhtml_branch_coverage=1 00:14:39.095 --rc genhtml_function_coverage=1 00:14:39.095 --rc genhtml_legend=1 00:14:39.095 --rc geninfo_all_blocks=1 00:14:39.095 --rc geninfo_unexecuted_blocks=1 00:14:39.095 00:14:39.095 ' 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:39.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.095 --rc genhtml_branch_coverage=1 00:14:39.095 --rc genhtml_function_coverage=1 00:14:39.095 --rc genhtml_legend=1 00:14:39.095 --rc geninfo_all_blocks=1 00:14:39.095 --rc geninfo_unexecuted_blocks=1 00:14:39.095 00:14:39.095 ' 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:39.095 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:39.095 --rc genhtml_branch_coverage=1 00:14:39.095 --rc genhtml_function_coverage=1 00:14:39.095 --rc genhtml_legend=1 00:14:39.095 --rc geninfo_all_blocks=1 00:14:39.095 --rc geninfo_unexecuted_blocks=1 00:14:39.095 00:14:39.095 ' 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.095 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.096 17:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:39.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2466087 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2466087' 00:14:39.096 Process pid: 2466087 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2466087 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2466087 ']' 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.096 17:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.096 17:08:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.355 17:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.355 17:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:39.355 17:08:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.290 malloc0 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:40.290 17:08:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:12.369 Fuzzing completed. Shutting down the fuzz application 00:15:12.369 00:15:12.369 Dumping successful admin opcodes: 00:15:12.369 8, 9, 10, 24, 00:15:12.369 Dumping successful io opcodes: 00:15:12.369 0, 00:15:12.369 NS: 0x20000081ef00 I/O qp, Total commands completed: 1149426, total successful commands: 4525, random_seed: 1038797504 00:15:12.369 NS: 0x20000081ef00 admin qp, Total commands completed: 284598, total successful commands: 2297, random_seed: 3328782080 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2466087 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2466087 ']' 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2466087 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466087 00:15:12.369 17:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466087' 00:15:12.369 killing process with pid 2466087 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2466087 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2466087 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:12.369 00:15:12.369 real 0m32.224s 00:15:12.369 user 0m33.619s 00:15:12.369 sys 0m27.332s 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.369 17:09:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:12.369 ************************************ 00:15:12.369 END TEST nvmf_vfio_user_fuzz 00:15:12.369 ************************************ 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.369 ************************************ 00:15:12.369 START TEST nvmf_auth_target 00:15:12.369 ************************************ 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:12.369 * Looking for test storage... 00:15:12.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.369 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.369 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:12.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.369 --rc genhtml_branch_coverage=1 00:15:12.369 --rc genhtml_function_coverage=1 00:15:12.369 --rc genhtml_legend=1 00:15:12.369 --rc geninfo_all_blocks=1 00:15:12.369 --rc geninfo_unexecuted_blocks=1 00:15:12.369 00:15:12.369 ' 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:12.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.369 --rc genhtml_branch_coverage=1 00:15:12.369 --rc genhtml_function_coverage=1 00:15:12.369 --rc genhtml_legend=1 00:15:12.369 --rc geninfo_all_blocks=1 00:15:12.369 --rc geninfo_unexecuted_blocks=1 00:15:12.369 00:15:12.369 ' 00:15:12.369 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:12.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.369 --rc genhtml_branch_coverage=1 00:15:12.369 --rc genhtml_function_coverage=1 00:15:12.369 --rc genhtml_legend=1 00:15:12.369 --rc geninfo_all_blocks=1 00:15:12.370 --rc geninfo_unexecuted_blocks=1 00:15:12.370 00:15:12.370 ' 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:12.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.370 --rc genhtml_branch_coverage=1 00:15:12.370 --rc genhtml_function_coverage=1 00:15:12.370 --rc genhtml_legend=1 00:15:12.370 
--rc geninfo_all_blocks=1 00:15:12.370 --rc geninfo_unexecuted_blocks=1 00:15:12.370 00:15:12.370 ' 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.370 
17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:12.370 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:12.370 17:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:12.370 17:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.645 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:17.646 17:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:17.646 17:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:17.646 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:17.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.646 
17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:17.646 Found net devices under 0000:86:00.0: cvl_0_0 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:17.646 
17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:17.646 Found net devices under 0000:86:00.1: cvl_0_1 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:17.646 17:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.646 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:17.647 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:17.647 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.647 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.647 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:17.647 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:17.647 17:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:17.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:15:17.647 00:15:17.647 --- 10.0.0.2 ping statistics --- 00:15:17.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.647 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:15:17.647 00:15:17.647 --- 10.0.0.1 ping statistics --- 00:15:17.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.647 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2474905 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2474905 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2474905 ']' 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2474987 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=40b89d8b806c9f854c8a8abcf6a7188bbe9d9a12d842e50c 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RF9 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 40b89d8b806c9f854c8a8abcf6a7188bbe9d9a12d842e50c 0 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 40b89d8b806c9f854c8a8abcf6a7188bbe9d9a12d842e50c 0 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=40b89d8b806c9f854c8a8abcf6a7188bbe9d9a12d842e50c 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RF9 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RF9 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.RF9 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da9f8b0d4715ffaaffca4fb2abcc8145dfecbf762116a332c97a4029cade5db5 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vng 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da9f8b0d4715ffaaffca4fb2abcc8145dfecbf762116a332c97a4029cade5db5 3 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 da9f8b0d4715ffaaffca4fb2abcc8145dfecbf762116a332c97a4029cade5db5 3 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da9f8b0d4715ffaaffca4fb2abcc8145dfecbf762116a332c97a4029cade5db5 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:17.647 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vng 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vng 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.vng 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7c1f1a721723abdb880880ed12919261 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.leL 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7c1f1a721723abdb880880ed12919261 1 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
7c1f1a721723abdb880880ed12919261 1 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7c1f1a721723abdb880880ed12919261 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.leL 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.leL 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.leL 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c1d951c192cde68a6a48275034ab9720fc1639163f130d93 00:15:17.906 17:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xok 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c1d951c192cde68a6a48275034ab9720fc1639163f130d93 2 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c1d951c192cde68a6a48275034ab9720fc1639163f130d93 2 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c1d951c192cde68a6a48275034ab9720fc1639163f130d93 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xok 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xok 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.xok 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d1b68cdfff2deda3fd7a4d82822d3539a5a60651c488d639 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lOS 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d1b68cdfff2deda3fd7a4d82822d3539a5a60651c488d639 2 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d1b68cdfff2deda3fd7a4d82822d3539a5a60651c488d639 2 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d1b68cdfff2deda3fd7a4d82822d3539a5a60651c488d639 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lOS 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lOS 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.lOS 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:17.906 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f5d4d23feab2ff1a1ff0ec25277720e7 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZdG 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f5d4d23feab2ff1a1ff0ec25277720e7 1 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f5d4d23feab2ff1a1ff0ec25277720e7 1 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f5d4d23feab2ff1a1ff0ec25277720e7 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
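The `gen_dhchap_key` steps traced above draw `len/2` random bytes with `xxd -p -c0 -l <len/2> /dev/urandom`, producing a lowercase hex key of `len` characters (48 for sha384, 32 for sha256). A minimal Python sketch of that key-material step (the helper name `gen_hex_key` is illustrative, not from `nvmf/common.sh`):

```python
import secrets

def gen_hex_key(hex_len: int) -> str:
    """Roughly what `xxd -p -c0 -l <hex_len/2> /dev/urandom` prints in the
    trace: hex_len/2 random bytes rendered as hex_len lowercase hex chars."""
    if hex_len % 2:
        raise ValueError("hex key length must be even")
    return secrets.token_hex(hex_len // 2)

# The trace generates 48-char keys for sha384 and 32-char keys for sha256.
key48 = gen_hex_key(48)
key32 = gen_hex_key(32)
```

Each such key is then passed to `format_dhchap_key` together with a digest index (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3) and written to a `mktemp` file with mode 0600.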
00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZdG 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZdG 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ZdG 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:17.907 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b198464864b749d2bf15719d6daab97e62ee1076de79fbbbd3eba55547dd8b3 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.z04 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b198464864b749d2bf15719d6daab97e62ee1076de79fbbbd3eba55547dd8b3 3 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 6b198464864b749d2bf15719d6daab97e62ee1076de79fbbbd3eba55547dd8b3 3 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b198464864b749d2bf15719d6daab97e62ee1076de79fbbbd3eba55547dd8b3 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.z04 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.z04 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.z04 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2474905 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2474905 ']' 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
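The inline `python -` invocations above (via `format_dhchap_key`/`format_key`) wrap each hex key in the `DHHC-1:<digest>:<base64>:` secret representation that appears later in the trace. A sketch of that formatting, under the assumption (consistent with the secrets visible in this log, e.g. the `DHHC-1:02:YzFk...` value) that the ASCII hex string plus its little-endian CRC-32 is base64-encoded and the digest index is printed as two hex digits; this is inferred from the log, not copied from `nvmf/common.sh`:

```python
import base64
import struct
import zlib

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Assumed DHHC-1 secret layout:
    DHHC-1:<digest as two hex digits>:base64(ascii_hex_key || crc32_le):"""
    raw = hex_key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(raw))  # CRC-32 of the ASCII key, LE
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "DHHC-1:{:02x}:{}:".format(digest, b64)

# The sha384 controller key generated earlier in the trace, digest index 2.
secret = format_dhchap_key(
    "c1d951c192cde68a6a48275034ab9720fc1639163f130d93", 2)
```

Decoding the base64 payload and stripping the trailing 4 CRC bytes recovers the original hex key, which is how the representation stays self-checking.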
00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.165 17:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2474987 /var/tmp/host.sock 00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2474987 ']' 00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:18.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.165 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RF9 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RF9 00:15:18.424 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RF9 00:15:18.683 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.vng ]] 00:15:18.683 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vng 00:15:18.683 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.683 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.683 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.683 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vng 00:15:18.683 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vng 00:15:18.942 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:18.942 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.leL 00:15:18.942 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.942 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.942 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.942 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.leL 00:15:18.942 17:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.leL 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.xok ]] 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xok 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xok 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xok 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lOS 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lOS 00:15:19.201 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lOS 00:15:19.460 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ZdG ]] 00:15:19.460 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZdG 00:15:19.460 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.460 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.460 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.460 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZdG 00:15:19.460 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZdG 00:15:19.719 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:19.719 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.z04 00:15:19.720 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.720 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.720 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.720 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.z04 00:15:19.720 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.z04 00:15:19.978 17:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.978 17:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:19.978 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.978 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.978 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.978 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.978 17:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.978 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.978 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.978 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.978 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.238 00:15:20.238 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.238 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.238 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.497 { 00:15:20.497 "cntlid": 1, 00:15:20.497 "qid": 0, 00:15:20.497 "state": "enabled", 00:15:20.497 "thread": "nvmf_tgt_poll_group_000", 00:15:20.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:20.497 "listen_address": { 00:15:20.497 "trtype": "TCP", 00:15:20.497 "adrfam": "IPv4", 00:15:20.497 "traddr": "10.0.0.2", 00:15:20.497 "trsvcid": "4420" 00:15:20.497 }, 00:15:20.497 "peer_address": { 00:15:20.497 "trtype": "TCP", 00:15:20.497 "adrfam": "IPv4", 00:15:20.497 "traddr": "10.0.0.1", 00:15:20.497 "trsvcid": "54264" 00:15:20.497 }, 00:15:20.497 "auth": { 00:15:20.497 "state": "completed", 00:15:20.497 "digest": "sha256", 00:15:20.497 "dhgroup": "null" 00:15:20.497 } 00:15:20.497 } 00:15:20.497 ]' 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.497 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.756 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:20.756 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.756 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.756 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.756 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.756 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:20.756 17:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:21.324 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.324 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:21.324 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.324 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.583 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.583 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.583 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.584 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.843 00:15:21.843 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.843 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.843 17:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.115 { 00:15:22.115 "cntlid": 3, 00:15:22.115 "qid": 0, 00:15:22.115 "state": "enabled", 00:15:22.115 "thread": "nvmf_tgt_poll_group_000", 00:15:22.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:22.115 "listen_address": { 00:15:22.115 "trtype": "TCP", 00:15:22.115 "adrfam": "IPv4", 00:15:22.115 
"traddr": "10.0.0.2", 00:15:22.115 "trsvcid": "4420" 00:15:22.115 }, 00:15:22.115 "peer_address": { 00:15:22.115 "trtype": "TCP", 00:15:22.115 "adrfam": "IPv4", 00:15:22.115 "traddr": "10.0.0.1", 00:15:22.115 "trsvcid": "40738" 00:15:22.115 }, 00:15:22.115 "auth": { 00:15:22.115 "state": "completed", 00:15:22.115 "digest": "sha256", 00:15:22.115 "dhgroup": "null" 00:15:22.115 } 00:15:22.115 } 00:15:22.115 ]' 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:22.115 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.426 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.426 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.426 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.426 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:22.426 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:23.001 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.001 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:23.001 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.001 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.001 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.001 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.001 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:23.001 17:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.260 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.519 00:15:23.519 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.519 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.519 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.778 { 00:15:23.778 "cntlid": 5, 00:15:23.778 "qid": 0, 00:15:23.778 "state": "enabled", 00:15:23.778 "thread": "nvmf_tgt_poll_group_000", 00:15:23.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:23.778 "listen_address": { 00:15:23.778 "trtype": "TCP", 00:15:23.778 "adrfam": "IPv4", 00:15:23.778 "traddr": "10.0.0.2", 00:15:23.778 "trsvcid": "4420" 00:15:23.778 }, 00:15:23.778 "peer_address": { 00:15:23.778 "trtype": "TCP", 00:15:23.778 "adrfam": "IPv4", 00:15:23.778 "traddr": "10.0.0.1", 00:15:23.778 "trsvcid": "40760" 00:15:23.778 }, 00:15:23.778 "auth": { 00:15:23.778 "state": "completed", 00:15:23.778 "digest": "sha256", 00:15:23.778 "dhgroup": "null" 00:15:23.778 } 00:15:23.778 } 00:15:23.778 ]' 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.778 17:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.778 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.037 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:24.038 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:24.605 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.605 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:24.606 
17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.606 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.606 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.606 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.606 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:24.606 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.865 17:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.865 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.124 00:15:25.124 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.124 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.124 17:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.124 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.124 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.124 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.124 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.382 17:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.382 { 00:15:25.382 "cntlid": 7, 00:15:25.382 "qid": 0, 00:15:25.382 "state": "enabled", 00:15:25.382 "thread": "nvmf_tgt_poll_group_000", 00:15:25.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:25.382 "listen_address": { 00:15:25.382 "trtype": "TCP", 00:15:25.382 "adrfam": "IPv4", 00:15:25.382 "traddr": "10.0.0.2", 00:15:25.382 "trsvcid": "4420" 00:15:25.382 }, 00:15:25.382 "peer_address": { 00:15:25.382 "trtype": "TCP", 00:15:25.382 "adrfam": "IPv4", 00:15:25.382 "traddr": "10.0.0.1", 00:15:25.382 "trsvcid": "40788" 00:15:25.382 }, 00:15:25.382 "auth": { 00:15:25.382 "state": "completed", 00:15:25.382 "digest": "sha256", 00:15:25.382 "dhgroup": "null" 00:15:25.382 } 00:15:25.382 } 00:15:25.382 ]' 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.382 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:25.641 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:25.641 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:26.208 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.467 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.467 17:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.726 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.726 { 00:15:26.726 "cntlid": 9, 00:15:26.726 "qid": 0, 00:15:26.726 "state": "enabled", 00:15:26.726 "thread": "nvmf_tgt_poll_group_000", 00:15:26.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:26.726 "listen_address": { 00:15:26.726 "trtype": "TCP", 00:15:26.726 "adrfam": "IPv4", 00:15:26.726 "traddr": "10.0.0.2", 00:15:26.726 "trsvcid": "4420" 00:15:26.726 }, 00:15:26.726 "peer_address": { 
00:15:26.726 "trtype": "TCP", 00:15:26.726 "adrfam": "IPv4", 00:15:26.726 "traddr": "10.0.0.1", 00:15:26.726 "trsvcid": "40804" 00:15:26.726 }, 00:15:26.726 "auth": { 00:15:26.726 "state": "completed", 00:15:26.726 "digest": "sha256", 00:15:26.726 "dhgroup": "ffdhe2048" 00:15:26.726 } 00:15:26.726 } 00:15:26.726 ]' 00:15:26.726 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.984 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.984 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.984 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:26.984 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.984 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.984 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.984 17:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.243 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:27.243 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:27.811 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.811 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:27.811 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.811 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.811 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.811 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.811 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.811 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.070 17:09:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.070 17:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.070 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.330 17:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.330 { 00:15:28.330 "cntlid": 11, 00:15:28.330 "qid": 0, 00:15:28.330 "state": "enabled", 00:15:28.330 "thread": "nvmf_tgt_poll_group_000", 00:15:28.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:28.330 "listen_address": { 00:15:28.330 "trtype": "TCP", 00:15:28.330 "adrfam": "IPv4", 00:15:28.330 "traddr": "10.0.0.2", 00:15:28.330 "trsvcid": "4420" 00:15:28.330 }, 00:15:28.330 "peer_address": { 00:15:28.330 "trtype": "TCP", 00:15:28.330 "adrfam": "IPv4", 00:15:28.330 "traddr": "10.0.0.1", 00:15:28.330 "trsvcid": "40828" 00:15:28.330 }, 00:15:28.330 "auth": { 00:15:28.330 "state": "completed", 00:15:28.330 "digest": "sha256", 00:15:28.330 "dhgroup": "ffdhe2048" 00:15:28.330 } 00:15:28.330 } 00:15:28.330 ]' 00:15:28.330 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.588 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:28.588 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.588 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:28.588 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.588 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.588 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.588 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.846 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:28.846 17:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:29.413 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.413 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:29.413 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.413 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.413 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.413 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.413 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:29.413 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.671 17:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.671 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.672 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.930 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.930 { 00:15:29.930 "cntlid": 13, 00:15:29.930 "qid": 0, 00:15:29.930 "state": "enabled", 00:15:29.930 "thread": "nvmf_tgt_poll_group_000", 00:15:29.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:29.930 "listen_address": { 00:15:29.930 "trtype": "TCP", 00:15:29.930 "adrfam": "IPv4", 00:15:29.930 "traddr": "10.0.0.2", 00:15:29.930 "trsvcid": "4420" 00:15:29.930 }, 00:15:29.930 "peer_address": { 00:15:29.930 "trtype": "TCP", 00:15:29.930 "adrfam": "IPv4", 00:15:29.930 "traddr": "10.0.0.1", 00:15:29.930 "trsvcid": "40866" 00:15:29.930 }, 00:15:29.930 "auth": { 00:15:29.930 "state": "completed", 00:15:29.930 "digest": "sha256", 00:15:29.930 "dhgroup": "ffdhe2048" 00:15:29.930 } 00:15:29.930 } 00:15:29.930 ]' 00:15:29.930 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.188 17:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.188 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.188 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:30.188 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.188 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.188 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:30.188 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.447 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:30.447 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:31.013 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.013 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:31.013 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.013 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.013 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.013 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.013 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:31.013 17:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.272 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.272 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.531 { 00:15:31.531 "cntlid": 15, 00:15:31.531 "qid": 0, 00:15:31.531 "state": "enabled", 00:15:31.531 "thread": "nvmf_tgt_poll_group_000", 00:15:31.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:31.531 "listen_address": { 00:15:31.531 "trtype": "TCP", 00:15:31.531 "adrfam": "IPv4", 00:15:31.531 "traddr": "10.0.0.2", 00:15:31.531 "trsvcid": 
"4420" 00:15:31.531 }, 00:15:31.531 "peer_address": { 00:15:31.531 "trtype": "TCP", 00:15:31.531 "adrfam": "IPv4", 00:15:31.531 "traddr": "10.0.0.1", 00:15:31.531 "trsvcid": "40882" 00:15:31.531 }, 00:15:31.531 "auth": { 00:15:31.531 "state": "completed", 00:15:31.531 "digest": "sha256", 00:15:31.531 "dhgroup": "ffdhe2048" 00:15:31.531 } 00:15:31.531 } 00:15:31.531 ]' 00:15:31.531 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.789 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.789 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.789 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:31.789 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.789 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.789 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.789 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.048 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:32.048 17:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:32.614 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.873 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.132 00:15:33.132 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.132 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:33.132 17:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.132 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.132 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.132 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.132 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.132 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.132 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.132 { 00:15:33.132 "cntlid": 17, 00:15:33.132 "qid": 0, 00:15:33.132 "state": "enabled", 00:15:33.132 "thread": "nvmf_tgt_poll_group_000", 00:15:33.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:33.132 "listen_address": { 00:15:33.132 "trtype": "TCP", 00:15:33.132 "adrfam": "IPv4", 00:15:33.132 "traddr": "10.0.0.2", 00:15:33.132 "trsvcid": "4420" 00:15:33.132 }, 00:15:33.132 "peer_address": { 00:15:33.132 "trtype": "TCP", 00:15:33.132 "adrfam": "IPv4", 00:15:33.132 "traddr": "10.0.0.1", 00:15:33.132 "trsvcid": "53934" 00:15:33.132 }, 00:15:33.132 "auth": { 00:15:33.132 "state": "completed", 00:15:33.132 "digest": "sha256", 00:15:33.132 "dhgroup": "ffdhe3072" 00:15:33.132 } 00:15:33.132 } 00:15:33.132 ]' 00:15:33.132 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.390 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.390 17:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.390 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:33.390 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.390 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.390 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.390 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.718 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:33.718 17:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:33.976 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.234 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.235 17:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.235 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.493 00:15:34.493 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.493 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.493 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.752 { 00:15:34.752 "cntlid": 19, 00:15:34.752 "qid": 0, 00:15:34.752 "state": "enabled", 00:15:34.752 "thread": "nvmf_tgt_poll_group_000", 00:15:34.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:34.752 "listen_address": { 00:15:34.752 "trtype": "TCP", 00:15:34.752 "adrfam": "IPv4", 00:15:34.752 "traddr": "10.0.0.2", 00:15:34.752 "trsvcid": "4420" 00:15:34.752 }, 00:15:34.752 "peer_address": { 00:15:34.752 "trtype": "TCP", 00:15:34.752 "adrfam": "IPv4", 00:15:34.752 "traddr": "10.0.0.1", 00:15:34.752 "trsvcid": "53942" 00:15:34.752 }, 00:15:34.752 "auth": { 00:15:34.752 "state": "completed", 00:15:34.752 "digest": "sha256", 00:15:34.752 "dhgroup": "ffdhe3072" 00:15:34.752 } 00:15:34.752 } 00:15:34.752 ]' 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.752 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.011 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:35.011 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.011 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.011 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:35.011 17:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.270 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:35.270 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.837 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.838 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.838 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.838 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.838 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.838 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.838 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.096 00:15:36.096 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.096 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.096 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.355 { 00:15:36.355 "cntlid": 21, 00:15:36.355 "qid": 0, 00:15:36.355 "state": "enabled", 00:15:36.355 "thread": "nvmf_tgt_poll_group_000", 00:15:36.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:36.355 "listen_address": { 
00:15:36.355 "trtype": "TCP", 00:15:36.355 "adrfam": "IPv4", 00:15:36.355 "traddr": "10.0.0.2", 00:15:36.355 "trsvcid": "4420" 00:15:36.355 }, 00:15:36.355 "peer_address": { 00:15:36.355 "trtype": "TCP", 00:15:36.355 "adrfam": "IPv4", 00:15:36.355 "traddr": "10.0.0.1", 00:15:36.355 "trsvcid": "53956" 00:15:36.355 }, 00:15:36.355 "auth": { 00:15:36.355 "state": "completed", 00:15:36.355 "digest": "sha256", 00:15:36.355 "dhgroup": "ffdhe3072" 00:15:36.355 } 00:15:36.355 } 00:15:36.355 ]' 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.355 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.613 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:36.613 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.613 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.613 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.613 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.613 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:36.613 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:37.180 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.181 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:37.181 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.181 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.181 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.181 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.181 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:37.181 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.441 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.700 00:15:37.700 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.700 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:37.700 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.958 { 00:15:37.958 "cntlid": 23, 00:15:37.958 "qid": 0, 00:15:37.958 "state": "enabled", 00:15:37.958 "thread": "nvmf_tgt_poll_group_000", 00:15:37.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:37.958 "listen_address": { 00:15:37.958 "trtype": "TCP", 00:15:37.958 "adrfam": "IPv4", 00:15:37.958 "traddr": "10.0.0.2", 00:15:37.958 "trsvcid": "4420" 00:15:37.958 }, 00:15:37.958 "peer_address": { 00:15:37.958 "trtype": "TCP", 00:15:37.958 "adrfam": "IPv4", 00:15:37.958 "traddr": "10.0.0.1", 00:15:37.958 "trsvcid": "53986" 00:15:37.958 }, 00:15:37.958 "auth": { 00:15:37.958 "state": "completed", 00:15:37.958 "digest": "sha256", 00:15:37.958 "dhgroup": "ffdhe3072" 00:15:37.958 } 00:15:37.958 } 00:15:37.958 ]' 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.958 17:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.958 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.217 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.217 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.217 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.217 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:38.217 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
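For readers following this trace: each connect_authenticate iteration above verifies the negotiated DH-HMAC-CHAP parameters by piping the `nvmf_subsystem_get_qpairs` JSON through the `jq` filters seen at `auth.sh@75`-`auth.sh@77`. The snippet below replays that check on a trimmed copy of the qpairs object printed in the log; it is an illustrative sketch (assuming `jq` is installed), not part of the test output.

```shell
# Trimmed qpairs JSON as printed by nvmf_subsystem_get_qpairs in the trace above.
qpairs='[{"cntlid": 23, "qid": 0, "state": "enabled",
  "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe3072"}}]'

# Same filters the test applies before declaring the iteration passed.
echo "$qpairs" | jq -r '.[0].auth.digest'   # sha256
echo "$qpairs" | jq -r '.[0].auth.dhgroup'  # ffdhe3072
echo "$qpairs" | jq -r '.[0].auth.state'    # completed
```

The test treats `"state": "completed"` as proof that mutual authentication finished before I/O; a mismatch in digest or dhgroup would fail the `[[ ... == ... ]]` comparisons in the trace.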
00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:38.784 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:39.043 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.043 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.043 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.043 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.043 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.301 00:15:39.301 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.301 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.301 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.560 17:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.560 { 00:15:39.560 "cntlid": 25, 00:15:39.560 "qid": 0, 00:15:39.560 "state": "enabled", 00:15:39.560 "thread": "nvmf_tgt_poll_group_000", 00:15:39.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:39.560 "listen_address": { 00:15:39.560 "trtype": "TCP", 00:15:39.560 "adrfam": "IPv4", 00:15:39.560 "traddr": "10.0.0.2", 00:15:39.560 "trsvcid": "4420" 00:15:39.560 }, 00:15:39.560 "peer_address": { 00:15:39.560 "trtype": "TCP", 00:15:39.560 "adrfam": "IPv4", 00:15:39.560 "traddr": "10.0.0.1", 00:15:39.560 "trsvcid": "54006" 00:15:39.560 }, 00:15:39.560 "auth": { 00:15:39.560 "state": "completed", 00:15:39.560 "digest": "sha256", 00:15:39.560 "dhgroup": "ffdhe4096" 00:15:39.560 } 00:15:39.560 } 00:15:39.560 ]' 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.560 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.818 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.818 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.819 17:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.819 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:39.819 17:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:40.385 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.385 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.385 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.385 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.385 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.385 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.385 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.385 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.644 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.903 00:15:40.903 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.903 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.903 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.161 { 00:15:41.161 "cntlid": 27, 00:15:41.161 "qid": 0, 00:15:41.161 "state": "enabled", 00:15:41.161 "thread": "nvmf_tgt_poll_group_000", 00:15:41.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:41.161 
"listen_address": { 00:15:41.161 "trtype": "TCP", 00:15:41.161 "adrfam": "IPv4", 00:15:41.161 "traddr": "10.0.0.2", 00:15:41.161 "trsvcid": "4420" 00:15:41.161 }, 00:15:41.161 "peer_address": { 00:15:41.161 "trtype": "TCP", 00:15:41.161 "adrfam": "IPv4", 00:15:41.161 "traddr": "10.0.0.1", 00:15:41.161 "trsvcid": "54032" 00:15:41.161 }, 00:15:41.161 "auth": { 00:15:41.161 "state": "completed", 00:15:41.161 "digest": "sha256", 00:15:41.161 "dhgroup": "ffdhe4096" 00:15:41.161 } 00:15:41.161 } 00:15:41.161 ]' 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.161 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.420 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:41.420 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.420 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.420 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.420 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.420 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:41.420 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:41.986 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.245 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.503 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.763 { 00:15:42.763 "cntlid": 29, 00:15:42.763 "qid": 0, 00:15:42.763 "state": "enabled", 00:15:42.763 "thread": "nvmf_tgt_poll_group_000", 00:15:42.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:42.763 "listen_address": { 00:15:42.763 "trtype": "TCP", 00:15:42.763 "adrfam": "IPv4", 00:15:42.763 "traddr": "10.0.0.2", 00:15:42.763 "trsvcid": "4420" 00:15:42.763 }, 00:15:42.763 "peer_address": { 00:15:42.763 "trtype": "TCP", 00:15:42.763 "adrfam": "IPv4", 00:15:42.763 "traddr": "10.0.0.1", 00:15:42.763 "trsvcid": "45774" 00:15:42.763 }, 00:15:42.763 "auth": { 00:15:42.763 "state": "completed", 00:15:42.763 "digest": "sha256", 00:15:42.763 "dhgroup": "ffdhe4096" 00:15:42.763 } 00:15:42.763 } 00:15:42.763 ]' 00:15:42.763 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.022 17:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.022 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.022 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:43.022 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.022 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.022 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.022 17:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.281 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:43.281 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:43.849 17:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.849 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.850 17:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.108 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.368 17:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.368 { 00:15:44.368 "cntlid": 31, 00:15:44.368 "qid": 0, 00:15:44.368 "state": "enabled", 00:15:44.368 "thread": "nvmf_tgt_poll_group_000", 00:15:44.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:44.368 "listen_address": { 00:15:44.368 "trtype": "TCP", 00:15:44.368 "adrfam": "IPv4", 00:15:44.368 "traddr": "10.0.0.2", 00:15:44.368 "trsvcid": "4420" 00:15:44.368 }, 00:15:44.368 "peer_address": { 00:15:44.368 "trtype": "TCP", 00:15:44.368 "adrfam": "IPv4", 00:15:44.368 "traddr": "10.0.0.1", 00:15:44.368 "trsvcid": "45796" 00:15:44.368 }, 00:15:44.368 "auth": { 00:15:44.368 "state": "completed", 00:15:44.368 "digest": "sha256", 00:15:44.368 "dhgroup": "ffdhe4096" 00:15:44.368 } 00:15:44.368 } 00:15:44.368 ]' 00:15:44.368 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.627 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.627 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.627 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:44.627 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.627 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.627 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.627 17:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.885 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:44.885 17:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.453 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.021 00:15:46.021 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.021 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.021 17:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.021 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.021 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.021 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.021 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.021 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.021 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.021 { 00:15:46.021 "cntlid": 33, 00:15:46.021 "qid": 0, 00:15:46.021 "state": "enabled", 00:15:46.021 "thread": "nvmf_tgt_poll_group_000", 00:15:46.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:46.021 "listen_address": { 
00:15:46.021 "trtype": "TCP", 00:15:46.021 "adrfam": "IPv4", 00:15:46.021 "traddr": "10.0.0.2", 00:15:46.021 "trsvcid": "4420" 00:15:46.021 }, 00:15:46.021 "peer_address": { 00:15:46.021 "trtype": "TCP", 00:15:46.021 "adrfam": "IPv4", 00:15:46.021 "traddr": "10.0.0.1", 00:15:46.021 "trsvcid": "45818" 00:15:46.021 }, 00:15:46.021 "auth": { 00:15:46.021 "state": "completed", 00:15:46.021 "digest": "sha256", 00:15:46.021 "dhgroup": "ffdhe6144" 00:15:46.021 } 00:15:46.021 } 00:15:46.021 ]' 00:15:46.021 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.280 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.280 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.280 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:46.280 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.280 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.280 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.280 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.539 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:46.539 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:47.107 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.107 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.107 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.107 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.107 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.107 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.107 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:47.107 17:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.107 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.366 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.366 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.366 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.366 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.625 00:15:47.625 17:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.625 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.625 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.885 { 00:15:47.885 "cntlid": 35, 00:15:47.885 "qid": 0, 00:15:47.885 "state": "enabled", 00:15:47.885 "thread": "nvmf_tgt_poll_group_000", 00:15:47.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:47.885 "listen_address": { 00:15:47.885 "trtype": "TCP", 00:15:47.885 "adrfam": "IPv4", 00:15:47.885 "traddr": "10.0.0.2", 00:15:47.885 "trsvcid": "4420" 00:15:47.885 }, 00:15:47.885 "peer_address": { 00:15:47.885 "trtype": "TCP", 00:15:47.885 "adrfam": "IPv4", 00:15:47.885 "traddr": "10.0.0.1", 00:15:47.885 "trsvcid": "45832" 00:15:47.885 }, 00:15:47.885 "auth": { 00:15:47.885 "state": "completed", 00:15:47.885 "digest": "sha256", 00:15:47.885 "dhgroup": "ffdhe6144" 00:15:47.885 } 00:15:47.885 } 00:15:47.885 ]' 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.885 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.143 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:48.143 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:48.711 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.711 17:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:48.711 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.711 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.711 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.711 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.711 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:48.711 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.969 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.227 00:15:49.227 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.227 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.227 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.486 { 00:15:49.486 "cntlid": 37, 00:15:49.486 "qid": 0, 00:15:49.486 "state": "enabled", 00:15:49.486 "thread": "nvmf_tgt_poll_group_000", 00:15:49.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:49.486 "listen_address": { 00:15:49.486 "trtype": "TCP", 00:15:49.486 "adrfam": "IPv4", 00:15:49.486 "traddr": "10.0.0.2", 00:15:49.486 "trsvcid": "4420" 00:15:49.486 }, 00:15:49.486 "peer_address": { 00:15:49.486 "trtype": "TCP", 00:15:49.486 "adrfam": "IPv4", 00:15:49.486 "traddr": "10.0.0.1", 00:15:49.486 "trsvcid": "45860" 00:15:49.486 }, 00:15:49.486 "auth": { 00:15:49.486 "state": "completed", 00:15:49.486 "digest": "sha256", 00:15:49.486 "dhgroup": "ffdhe6144" 00:15:49.486 } 00:15:49.486 } 00:15:49.486 ]' 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.486 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.744 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:49.744 17:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:50.312 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.312 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.312 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.312 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.312 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.312 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:50.312 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:50.312 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:50.570 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:50.570 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.571 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.829 00:15:50.829 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.829 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.829 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.087 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.087 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.087 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.087 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.088 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.088 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.088 { 00:15:51.088 "cntlid": 39, 00:15:51.088 "qid": 0, 00:15:51.088 "state": "enabled", 00:15:51.088 "thread": "nvmf_tgt_poll_group_000", 00:15:51.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:51.088 "listen_address": { 00:15:51.088 "trtype": 
"TCP", 00:15:51.088 "adrfam": "IPv4", 00:15:51.088 "traddr": "10.0.0.2", 00:15:51.088 "trsvcid": "4420" 00:15:51.088 }, 00:15:51.088 "peer_address": { 00:15:51.088 "trtype": "TCP", 00:15:51.088 "adrfam": "IPv4", 00:15:51.088 "traddr": "10.0.0.1", 00:15:51.088 "trsvcid": "45902" 00:15:51.088 }, 00:15:51.088 "auth": { 00:15:51.088 "state": "completed", 00:15:51.088 "digest": "sha256", 00:15:51.088 "dhgroup": "ffdhe6144" 00:15:51.088 } 00:15:51.088 } 00:15:51.088 ]' 00:15:51.088 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.088 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.088 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.346 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.346 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.346 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.346 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.346 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.346 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:51.346 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:51.913 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.913 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.913 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.179 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.179 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.179 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.179 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.179 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.179 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.179 17:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.179 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.748 00:15:52.748 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.748 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.748 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.008 { 00:15:53.008 "cntlid": 41, 00:15:53.008 "qid": 0, 00:15:53.008 "state": "enabled", 00:15:53.008 "thread": "nvmf_tgt_poll_group_000", 00:15:53.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:53.008 "listen_address": { 00:15:53.008 "trtype": "TCP", 00:15:53.008 "adrfam": "IPv4", 00:15:53.008 "traddr": "10.0.0.2", 00:15:53.008 "trsvcid": "4420" 00:15:53.008 }, 00:15:53.008 "peer_address": { 00:15:53.008 "trtype": "TCP", 00:15:53.008 "adrfam": "IPv4", 00:15:53.008 "traddr": "10.0.0.1", 00:15:53.008 "trsvcid": "33128" 00:15:53.008 }, 00:15:53.008 "auth": { 00:15:53.008 "state": "completed", 00:15:53.008 "digest": "sha256", 00:15:53.008 "dhgroup": "ffdhe8192" 00:15:53.008 } 00:15:53.008 } 00:15:53.008 ]' 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.008 17:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.008 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.273 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:53.274 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:15:53.841 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:53.841 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.841 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.841 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.841 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.841 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.841 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:53.841 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:54.099 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:54.099 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.099 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.100 17:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.667 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.667 { 00:15:54.667 "cntlid": 43, 00:15:54.667 "qid": 0, 00:15:54.667 "state": "enabled", 00:15:54.667 "thread": "nvmf_tgt_poll_group_000", 00:15:54.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:54.667 "listen_address": { 00:15:54.667 "trtype": "TCP", 00:15:54.667 "adrfam": "IPv4", 00:15:54.667 "traddr": "10.0.0.2", 00:15:54.667 "trsvcid": "4420" 00:15:54.667 }, 00:15:54.667 "peer_address": { 00:15:54.667 "trtype": "TCP", 00:15:54.667 "adrfam": "IPv4", 00:15:54.667 "traddr": "10.0.0.1", 00:15:54.667 "trsvcid": "33164" 00:15:54.667 }, 00:15:54.667 "auth": { 00:15:54.667 "state": "completed", 00:15:54.667 "digest": "sha256", 00:15:54.667 "dhgroup": "ffdhe8192" 00:15:54.667 } 00:15:54.667 } 00:15:54.667 ]' 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.667 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.926 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.926 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.926 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:54.926 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.926 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.926 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:54.926 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:15:55.492 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.492 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:55.492 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.492 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.751 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.318 00:15:56.318 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.318 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.318 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.577 { 00:15:56.577 "cntlid": 45, 00:15:56.577 "qid": 0, 00:15:56.577 "state": "enabled", 00:15:56.577 "thread": "nvmf_tgt_poll_group_000", 00:15:56.577 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:56.577 "listen_address": { 00:15:56.577 "trtype": "TCP", 00:15:56.577 "adrfam": "IPv4", 00:15:56.577 "traddr": "10.0.0.2", 00:15:56.577 "trsvcid": "4420" 00:15:56.577 }, 00:15:56.577 "peer_address": { 00:15:56.577 "trtype": "TCP", 00:15:56.577 "adrfam": "IPv4", 00:15:56.577 "traddr": "10.0.0.1", 00:15:56.577 "trsvcid": "33186" 00:15:56.577 }, 00:15:56.577 "auth": { 00:15:56.577 "state": "completed", 00:15:56.577 "digest": "sha256", 00:15:56.577 "dhgroup": "ffdhe8192" 00:15:56.577 } 00:15:56.577 } 00:15:56.577 ]' 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.577 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.836 17:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:56.836 17:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:15:57.401 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.401 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:57.401 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.402 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.402 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.402 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.402 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:57.402 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.660 17:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.225 00:15:58.225 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:58.225 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.225 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.225 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.225 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.225 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.225 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.483 { 00:15:58.483 "cntlid": 47, 00:15:58.483 "qid": 0, 00:15:58.483 "state": "enabled", 00:15:58.483 "thread": "nvmf_tgt_poll_group_000", 00:15:58.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:58.483 "listen_address": { 00:15:58.483 "trtype": "TCP", 00:15:58.483 "adrfam": "IPv4", 00:15:58.483 "traddr": "10.0.0.2", 00:15:58.483 "trsvcid": "4420" 00:15:58.483 }, 00:15:58.483 "peer_address": { 00:15:58.483 "trtype": "TCP", 00:15:58.483 "adrfam": "IPv4", 00:15:58.483 "traddr": "10.0.0.1", 00:15:58.483 "trsvcid": "33206" 00:15:58.483 }, 00:15:58.483 "auth": { 00:15:58.483 "state": "completed", 00:15:58.483 "digest": "sha256", 00:15:58.483 "dhgroup": "ffdhe8192" 00:15:58.483 } 00:15:58.483 } 00:15:58.483 ]' 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.483 17:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.483 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.741 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:58.741 17:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.308 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.309 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.309 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.309 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.309 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.309 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.309 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.309 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.567 00:15:59.567 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.567 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.567 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.830 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.830 17:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.830 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.830 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.830 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.830 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.830 { 00:15:59.830 "cntlid": 49, 00:15:59.830 "qid": 0, 00:15:59.830 "state": "enabled", 00:15:59.830 "thread": "nvmf_tgt_poll_group_000", 00:15:59.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:59.830 "listen_address": { 00:15:59.830 "trtype": "TCP", 00:15:59.830 "adrfam": "IPv4", 00:15:59.830 "traddr": "10.0.0.2", 00:15:59.830 "trsvcid": "4420" 00:15:59.830 }, 00:15:59.830 "peer_address": { 00:15:59.830 "trtype": "TCP", 00:15:59.830 "adrfam": "IPv4", 00:15:59.830 "traddr": "10.0.0.1", 00:15:59.830 "trsvcid": "33236" 00:15:59.830 }, 00:15:59.830 "auth": { 00:15:59.830 "state": "completed", 00:15:59.830 "digest": "sha384", 00:15:59.830 "dhgroup": "null" 00:15:59.830 } 00:15:59.830 } 00:15:59.830 ]' 00:15:59.830 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.830 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.830 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.112 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:00.112 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.112 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.112 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.112 17:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.112 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:00.112 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:00.717 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.717 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:00.717 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.717 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.717 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.717 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.717 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:00.717 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:00.976 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.977 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.236 00:16:01.236 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.236 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.236 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.494 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.494 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.494 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.494 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.494 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.494 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.494 { 00:16:01.494 "cntlid": 51, 
00:16:01.494 "qid": 0, 00:16:01.494 "state": "enabled", 00:16:01.494 "thread": "nvmf_tgt_poll_group_000", 00:16:01.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:01.495 "listen_address": { 00:16:01.495 "trtype": "TCP", 00:16:01.495 "adrfam": "IPv4", 00:16:01.495 "traddr": "10.0.0.2", 00:16:01.495 "trsvcid": "4420" 00:16:01.495 }, 00:16:01.495 "peer_address": { 00:16:01.495 "trtype": "TCP", 00:16:01.495 "adrfam": "IPv4", 00:16:01.495 "traddr": "10.0.0.1", 00:16:01.495 "trsvcid": "33254" 00:16:01.495 }, 00:16:01.495 "auth": { 00:16:01.495 "state": "completed", 00:16:01.495 "digest": "sha384", 00:16:01.495 "dhgroup": "null" 00:16:01.495 } 00:16:01.495 } 00:16:01.495 ]' 00:16:01.495 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.495 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.495 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.495 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:01.495 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.495 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.495 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.495 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.753 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret 
DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:01.753 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:02.320 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.320 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:02.320 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.320 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.320 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.320 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.320 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:02.320 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.579 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.838 00:16:02.838 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.838 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.838 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.097 { 00:16:03.097 "cntlid": 53, 00:16:03.097 "qid": 0, 00:16:03.097 "state": "enabled", 00:16:03.097 "thread": "nvmf_tgt_poll_group_000", 00:16:03.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:03.097 "listen_address": { 00:16:03.097 "trtype": "TCP", 00:16:03.097 "adrfam": "IPv4", 00:16:03.097 "traddr": "10.0.0.2", 00:16:03.097 "trsvcid": "4420" 00:16:03.097 }, 00:16:03.097 "peer_address": { 00:16:03.097 "trtype": "TCP", 00:16:03.097 "adrfam": "IPv4", 00:16:03.097 "traddr": "10.0.0.1", 00:16:03.097 "trsvcid": "43396" 00:16:03.097 }, 00:16:03.097 "auth": { 00:16:03.097 "state": "completed", 00:16:03.097 "digest": "sha384", 00:16:03.097 "dhgroup": "null" 00:16:03.097 } 00:16:03.097 } 
00:16:03.097 ]' 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.097 17:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.097 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.097 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.097 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.356 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:03.356 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:03.922 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.922 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.922 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:03.922 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.922 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.922 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.922 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.922 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:03.922 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.181 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.439 00:16:04.439 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.439 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.439 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.439 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.439 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:04.439 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.439 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.698 { 00:16:04.698 "cntlid": 55, 00:16:04.698 "qid": 0, 00:16:04.698 "state": "enabled", 00:16:04.698 "thread": "nvmf_tgt_poll_group_000", 00:16:04.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:04.698 "listen_address": { 00:16:04.698 "trtype": "TCP", 00:16:04.698 "adrfam": "IPv4", 00:16:04.698 "traddr": "10.0.0.2", 00:16:04.698 "trsvcid": "4420" 00:16:04.698 }, 00:16:04.698 "peer_address": { 00:16:04.698 "trtype": "TCP", 00:16:04.698 "adrfam": "IPv4", 00:16:04.698 "traddr": "10.0.0.1", 00:16:04.698 "trsvcid": "43432" 00:16:04.698 }, 00:16:04.698 "auth": { 00:16:04.698 "state": "completed", 00:16:04.698 "digest": "sha384", 00:16:04.698 "dhgroup": "null" 00:16:04.698 } 00:16:04.698 } 00:16:04.698 ]' 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.698 17:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.698 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.956 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:04.956 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:05.523 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.523 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:05.523 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.523 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.523 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.523 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.523 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.523 17:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:05.523 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.782 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.783 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.041 00:16:06.041 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.041 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.041 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.041 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.041 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.041 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.041 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.041 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.041 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.041 { 00:16:06.041 "cntlid": 57, 00:16:06.041 "qid": 0, 00:16:06.041 "state": "enabled", 00:16:06.041 "thread": "nvmf_tgt_poll_group_000", 00:16:06.041 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:06.041 "listen_address": { 00:16:06.041 "trtype": "TCP", 00:16:06.041 "adrfam": "IPv4", 00:16:06.041 "traddr": "10.0.0.2", 00:16:06.041 "trsvcid": "4420" 00:16:06.041 }, 00:16:06.041 "peer_address": { 00:16:06.041 "trtype": "TCP", 00:16:06.041 "adrfam": "IPv4", 00:16:06.041 "traddr": "10.0.0.1", 00:16:06.041 "trsvcid": "43450" 00:16:06.041 }, 00:16:06.041 "auth": { 00:16:06.041 "state": "completed", 00:16:06.041 "digest": "sha384", 00:16:06.041 "dhgroup": "ffdhe2048" 00:16:06.041 } 00:16:06.041 } 00:16:06.041 ]' 00:16:06.041 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.300 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.300 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.300 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:06.300 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.300 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.300 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.300 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.559 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret 
DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:06.559 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:07.126 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.126 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:07.126 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.126 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.126 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.126 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.126 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:07.126 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:07.126 17:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:07.126 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.126 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.126 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:07.126 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.127 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.127 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.127 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.127 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.127 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.127 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.127 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.127 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.386 00:16:07.386 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.386 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.386 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.645 { 00:16:07.645 "cntlid": 59, 00:16:07.645 "qid": 0, 00:16:07.645 "state": "enabled", 00:16:07.645 "thread": "nvmf_tgt_poll_group_000", 00:16:07.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:07.645 "listen_address": { 00:16:07.645 "trtype": "TCP", 00:16:07.645 "adrfam": "IPv4", 00:16:07.645 "traddr": "10.0.0.2", 00:16:07.645 "trsvcid": "4420" 00:16:07.645 }, 00:16:07.645 "peer_address": { 00:16:07.645 "trtype": "TCP", 00:16:07.645 "adrfam": "IPv4", 00:16:07.645 "traddr": "10.0.0.1", 00:16:07.645 "trsvcid": "43480" 00:16:07.645 }, 00:16:07.645 "auth": { 00:16:07.645 "state": 
"completed",
00:16:07.645 "digest": "sha384",
00:16:07.645 "dhgroup": "ffdhe2048"
00:16:07.645 }
00:16:07.645 }
00:16:07.645 ]'
00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:07.645 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:07.904 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:07.904 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:07.904 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:07.904 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:07.904 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:07.904 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==:
00:16:07.904 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==:
00:16:08.838 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:08.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:08.839 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:09.097
00:16:09.097 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:09.097 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:09.097 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:09.355 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:09.356 {
00:16:09.356 "cntlid": 61,
00:16:09.356 "qid": 0,
00:16:09.356 "state": "enabled",
00:16:09.356 "thread": "nvmf_tgt_poll_group_000",
00:16:09.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:09.356 "listen_address": {
00:16:09.356 "trtype": "TCP",
00:16:09.356 "adrfam": "IPv4",
00:16:09.356 "traddr": "10.0.0.2",
00:16:09.356 "trsvcid": "4420"
00:16:09.356 },
00:16:09.356 "peer_address": {
00:16:09.356 "trtype": "TCP",
00:16:09.356 "adrfam": "IPv4",
00:16:09.356 "traddr": "10.0.0.1",
00:16:09.356 "trsvcid": "43496"
00:16:09.356 },
00:16:09.356 "auth": {
00:16:09.356 "state": "completed",
00:16:09.356 "digest": "sha384",
00:16:09.356 "dhgroup": "ffdhe2048"
00:16:09.356 }
00:16:09.356 }
00:16:09.356 ]'
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:09.356 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:09.614 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:16:09.614 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:16:10.181 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:10.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:10.181 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:10.181 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.181 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.181 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.181 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:10.181 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:10.181 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:16:10.439 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:16:10.439 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:10.439 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:10.439 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:10.439 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:10.439 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:10.439 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:16:10.440 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.440 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.440 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.440 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:10.440 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:10.440 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:10.698
00:16:10.698 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:10.698 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:10.698 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.957 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.957 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.957 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.957 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.957 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.957 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:10.957 {
00:16:10.957 "cntlid": 63,
00:16:10.957 "qid": 0,
00:16:10.957 "state": "enabled",
00:16:10.957 "thread": "nvmf_tgt_poll_group_000",
00:16:10.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:10.957 "listen_address": {
00:16:10.957 "trtype": "TCP",
00:16:10.957 "adrfam": "IPv4",
00:16:10.957 "traddr": "10.0.0.2",
00:16:10.957 "trsvcid": "4420"
00:16:10.957 },
00:16:10.957 "peer_address": {
00:16:10.957 "trtype": "TCP",
00:16:10.957 "adrfam": "IPv4",
00:16:10.957 "traddr": "10.0.0.1",
00:16:10.958 "trsvcid": "43530"
00:16:10.958 },
00:16:10.958 "auth": {
00:16:10.958 "state": "completed",
00:16:10.958 "digest": "sha384",
00:16:10.958 "dhgroup": "ffdhe2048"
00:16:10.958 }
00:16:10.958 }
00:16:10.958 ]'
00:16:10.958 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:10.958 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:10.958 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:10.958 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:10.958 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:10.958 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.958 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.958 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:11.216 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:16:11.217 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:11.783 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:12.042 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:12.301
00:16:12.301 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:12.301 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:12.301 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:12.559 {
00:16:12.559 "cntlid": 65,
00:16:12.559 "qid": 0,
00:16:12.559 "state": "enabled",
00:16:12.559 "thread": "nvmf_tgt_poll_group_000",
00:16:12.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:12.559 "listen_address": {
00:16:12.559 "trtype": "TCP",
00:16:12.559 "adrfam": "IPv4",
00:16:12.559 "traddr": "10.0.0.2",
00:16:12.559 "trsvcid": "4420"
00:16:12.559 },
00:16:12.559 "peer_address": {
00:16:12.559 "trtype": "TCP",
00:16:12.559 "adrfam": "IPv4",
00:16:12.559 "traddr": "10.0.0.1",
00:16:12.559 "trsvcid": "34202"
00:16:12.559 },
00:16:12.559 "auth": {
00:16:12.559 "state": "completed",
00:16:12.559 "digest": "sha384",
00:16:12.559 "dhgroup": "ffdhe3072"
00:16:12.559 }
00:16:12.559 }
00:16:12.559 ]'
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:12.559 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:12.560 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:12.560 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:12.560 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.560 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.560 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.818 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=:
00:16:12.818 17:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=:
00:16:13.385 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.385 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:13.385 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.385 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.385 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.385 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:13.385 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:13.385 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.644 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:13.903
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:13.903 {
00:16:13.903 "cntlid": 67,
00:16:13.903 "qid": 0,
00:16:13.903 "state": "enabled",
00:16:13.903 "thread": "nvmf_tgt_poll_group_000",
00:16:13.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:13.903 "listen_address": {
00:16:13.903 "trtype": "TCP",
00:16:13.903 "adrfam": "IPv4",
00:16:13.903 "traddr": "10.0.0.2",
00:16:13.903 "trsvcid": "4420"
00:16:13.903 },
00:16:13.903 "peer_address": {
00:16:13.903 "trtype": "TCP",
00:16:13.903 "adrfam": "IPv4",
00:16:13.903 "traddr": "10.0.0.1",
00:16:13.903 "trsvcid": "34230"
00:16:13.903 },
00:16:13.903 "auth": {
00:16:13.903 "state": "completed",
00:16:13.903 "digest": "sha384",
00:16:13.903 "dhgroup": "ffdhe3072"
00:16:13.903 }
00:16:13.903 }
00:16:13.903 ]'
00:16:13.903 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:14.162 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:14.162 17:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:14.162 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:14.162 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:14.162 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.162 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:14.162 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:14.421 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==:
00:16:14.421 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==:
00:16:14.988 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:14.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:14.988 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:14.988 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.988 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.988 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.988 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:14.988 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:14.988 17:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:15.246 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:15.505
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.505 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:15.505 {
00:16:15.505 "cntlid": 69,
00:16:15.505 "qid": 0,
00:16:15.505 "state": "enabled",
00:16:15.505 "thread": "nvmf_tgt_poll_group_000",
00:16:15.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:15.505 "listen_address": {
00:16:15.505 "trtype": "TCP",
00:16:15.505 "adrfam": "IPv4",
00:16:15.505 "traddr": "10.0.0.2",
00:16:15.505 "trsvcid": "4420"
00:16:15.505 },
00:16:15.505 "peer_address": {
00:16:15.505 "trtype": "TCP",
00:16:15.505 "adrfam": "IPv4",
00:16:15.505 "traddr": "10.0.0.1",
00:16:15.505 "trsvcid": "34268"
00:16:15.505 },
00:16:15.505 "auth": {
00:16:15.505 "state": "completed",
00:16:15.505 "digest": "sha384",
00:16:15.505 "dhgroup": "ffdhe3072"
00:16:15.505 }
00:16:15.505 }
00:16:15.506 ]'
00:16:15.506 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:15.506 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:15.506 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:15.764 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:15.764 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:15.764 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:15.764 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:15.764 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:16.022 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:16:16.022 17:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:16.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:16.589 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.847 00:16:16.847 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.847 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.847 17:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.105 { 00:16:17.105 "cntlid": 71, 00:16:17.105 "qid": 0, 00:16:17.105 "state": "enabled", 00:16:17.105 "thread": "nvmf_tgt_poll_group_000", 00:16:17.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:17.105 "listen_address": { 00:16:17.105 "trtype": "TCP", 00:16:17.105 "adrfam": "IPv4", 00:16:17.105 "traddr": "10.0.0.2", 00:16:17.105 "trsvcid": "4420" 00:16:17.105 }, 00:16:17.105 "peer_address": { 00:16:17.105 "trtype": "TCP", 00:16:17.105 "adrfam": "IPv4", 00:16:17.105 "traddr": "10.0.0.1", 
00:16:17.105 "trsvcid": "34304" 00:16:17.105 }, 00:16:17.105 "auth": { 00:16:17.105 "state": "completed", 00:16:17.105 "digest": "sha384", 00:16:17.105 "dhgroup": "ffdhe3072" 00:16:17.105 } 00:16:17.105 } 00:16:17.105 ]' 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.105 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.364 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.364 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.364 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.364 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:17.364 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.931 17:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.190 17:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.190 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.448 00:16:18.448 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.448 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.448 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.707 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.707 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.707 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.708 { 00:16:18.708 "cntlid": 73, 00:16:18.708 "qid": 0, 00:16:18.708 "state": "enabled", 00:16:18.708 "thread": "nvmf_tgt_poll_group_000", 00:16:18.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:18.708 "listen_address": { 00:16:18.708 "trtype": "TCP", 00:16:18.708 "adrfam": "IPv4", 00:16:18.708 "traddr": "10.0.0.2", 00:16:18.708 "trsvcid": "4420" 00:16:18.708 }, 00:16:18.708 "peer_address": { 00:16:18.708 "trtype": "TCP", 00:16:18.708 "adrfam": "IPv4", 00:16:18.708 "traddr": "10.0.0.1", 00:16:18.708 "trsvcid": "34328" 00:16:18.708 }, 00:16:18.708 "auth": { 00:16:18.708 "state": "completed", 00:16:18.708 "digest": "sha384", 00:16:18.708 "dhgroup": "ffdhe4096" 00:16:18.708 } 00:16:18.708 } 00:16:18.708 ]' 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.708 17:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.708 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.966 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:18.966 17:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:19.535 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.535 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:19.535 17:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.535 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.535 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.535 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.536 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:19.536 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.796 17:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.796 17:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.055 00:16:20.055 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.055 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.055 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.314 { 00:16:20.314 "cntlid": 75, 00:16:20.314 "qid": 0, 00:16:20.314 "state": "enabled", 00:16:20.314 "thread": "nvmf_tgt_poll_group_000", 00:16:20.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:20.314 "listen_address": { 00:16:20.314 "trtype": "TCP", 00:16:20.314 "adrfam": "IPv4", 00:16:20.314 "traddr": "10.0.0.2", 00:16:20.314 "trsvcid": "4420" 00:16:20.314 }, 00:16:20.314 "peer_address": { 00:16:20.314 "trtype": "TCP", 00:16:20.314 "adrfam": "IPv4", 00:16:20.314 "traddr": "10.0.0.1", 00:16:20.314 "trsvcid": "34356" 00:16:20.314 }, 00:16:20.314 "auth": { 00:16:20.314 "state": "completed", 00:16:20.314 "digest": "sha384", 00:16:20.314 "dhgroup": "ffdhe4096" 00:16:20.314 } 00:16:20.314 } 00:16:20.314 ]' 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.314 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.573 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:20.573 17:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:21.140 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.140 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:21.140 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.140 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.140 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.140 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.140 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.140 17:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.397 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.656 00:16:21.656 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.656 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.656 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.915 { 00:16:21.915 "cntlid": 77, 00:16:21.915 "qid": 0, 00:16:21.915 "state": "enabled", 00:16:21.915 "thread": "nvmf_tgt_poll_group_000", 00:16:21.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:21.915 "listen_address": { 00:16:21.915 "trtype": "TCP", 00:16:21.915 "adrfam": "IPv4", 00:16:21.915 "traddr": "10.0.0.2", 00:16:21.915 
"trsvcid": "4420" 00:16:21.915 }, 00:16:21.915 "peer_address": { 00:16:21.915 "trtype": "TCP", 00:16:21.915 "adrfam": "IPv4", 00:16:21.915 "traddr": "10.0.0.1", 00:16:21.915 "trsvcid": "34392" 00:16:21.915 }, 00:16:21.915 "auth": { 00:16:21.915 "state": "completed", 00:16:21.915 "digest": "sha384", 00:16:21.915 "dhgroup": "ffdhe4096" 00:16:21.915 } 00:16:21.915 } 00:16:21.915 ]' 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.915 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.916 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.916 17:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.174 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:22.174 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:16:22.741 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:22.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:22.741 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:22.741 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:22.742 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.742 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:22.742 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:22.742 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:22.742 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:23.000 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:23.001 17:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:23.259
00:16:23.259 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:23.259 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:23.259 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:23.518 {
00:16:23.518 "cntlid": 79,
00:16:23.518 "qid": 0,
00:16:23.518 "state": "enabled",
00:16:23.518 "thread": "nvmf_tgt_poll_group_000",
00:16:23.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:23.518 "listen_address": {
00:16:23.518 "trtype": "TCP",
00:16:23.518 "adrfam": "IPv4",
00:16:23.518 "traddr": "10.0.0.2",
00:16:23.518 "trsvcid": "4420"
00:16:23.518 },
00:16:23.518 "peer_address": {
00:16:23.518 "trtype": "TCP",
00:16:23.518 "adrfam": "IPv4",
00:16:23.518 "traddr": "10.0.0.1",
00:16:23.518 "trsvcid": "49186"
00:16:23.518 },
00:16:23.518 "auth": {
00:16:23.518 "state": "completed",
00:16:23.518 "digest": "sha384",
00:16:23.518 "dhgroup": "ffdhe4096"
00:16:23.518 }
00:16:23.518 }
00:16:23.518 ]'
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:23.518 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:23.776 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:23.776 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:23.776 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:23.776 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:16:23.776 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:24.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:24.343 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.602 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:24.860
00:16:24.860 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:24.860 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:24.860 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:25.119 {
00:16:25.119 "cntlid": 81,
00:16:25.119 "qid": 0,
00:16:25.119 "state": "enabled",
00:16:25.119 "thread": "nvmf_tgt_poll_group_000",
00:16:25.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:25.119 "listen_address": {
00:16:25.119 "trtype": "TCP",
00:16:25.119 "adrfam": "IPv4",
00:16:25.119 "traddr": "10.0.0.2",
00:16:25.119 "trsvcid": "4420"
00:16:25.119 },
00:16:25.119 "peer_address": {
00:16:25.119 "trtype": "TCP",
00:16:25.119 "adrfam": "IPv4",
00:16:25.119 "traddr": "10.0.0.1",
00:16:25.119 "trsvcid": "49218"
00:16:25.119 },
00:16:25.119 "auth": {
00:16:25.119 "state": "completed",
00:16:25.119 "digest": "sha384",
00:16:25.119 "dhgroup": "ffdhe6144"
00:16:25.119 }
00:16:25.119 }
00:16:25.119 ]'
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:25.119 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:25.378 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:25.378 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:25.378 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.378 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.378 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.378 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=:
00:16:25.378 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=:
00:16:26.315 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.315 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:26.573
00:16:26.573 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:26.573 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:26.573 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:26.831 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:26.831 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:26.831 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.831 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.831 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.831 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:26.831 {
00:16:26.831 "cntlid": 83,
00:16:26.831 "qid": 0,
00:16:26.831 "state": "enabled",
00:16:26.831 "thread": "nvmf_tgt_poll_group_000",
00:16:26.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:26.831 "listen_address": {
00:16:26.831 "trtype": "TCP",
00:16:26.831 "adrfam": "IPv4",
00:16:26.831 "traddr": "10.0.0.2",
00:16:26.831 "trsvcid": "4420"
00:16:26.831 },
00:16:26.831 "peer_address": {
00:16:26.831 "trtype": "TCP",
00:16:26.831 "adrfam": "IPv4",
00:16:26.831 "traddr": "10.0.0.1",
00:16:26.831 "trsvcid": "49238"
00:16:26.831 },
00:16:26.831 "auth": {
00:16:26.831 "state": "completed",
00:16:26.831 "digest": "sha384",
00:16:26.831 "dhgroup": "ffdhe6144"
00:16:26.831 }
00:16:26.831 }
00:16:26.831 ]'
00:16:26.832 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:26.832 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:26.832 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:26.832 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:26.832 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:27.090 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.090 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:27.090 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.090 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==:
00:16:27.090 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==:
00:16:27.657 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:27.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:27.657 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:27.657 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.657 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.657 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.657 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:27.657 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:27.657 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:27.917 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:28.176
00:16:28.176 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:28.176 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:28.176 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:28.435 {
00:16:28.435 "cntlid": 85,
00:16:28.435 "qid": 0,
00:16:28.435 "state": "enabled",
00:16:28.435 "thread": "nvmf_tgt_poll_group_000",
00:16:28.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:28.435 "listen_address": {
00:16:28.435 "trtype": "TCP",
00:16:28.435 "adrfam": "IPv4",
00:16:28.435 "traddr": "10.0.0.2",
00:16:28.435 "trsvcid": "4420"
00:16:28.435 },
00:16:28.435 "peer_address": {
00:16:28.435 "trtype": "TCP",
00:16:28.435 "adrfam": "IPv4",
00:16:28.435 "traddr": "10.0.0.1",
00:16:28.435 "trsvcid": "49280"
00:16:28.435 },
00:16:28.435 "auth": {
00:16:28.435 "state": "completed",
00:16:28.435 "digest": "sha384",
00:16:28.435 "dhgroup": "ffdhe6144"
00:16:28.435 }
00:16:28.435 }
00:16:28.435 ]'
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:28.435 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:28.693 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:28.693 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.693 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:28.693 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:16:28.693 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:16:29.261 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:29.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:29.261 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:29.261 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.261 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:29.520 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:29.521 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.521 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:16:29.521 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.521 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.521 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.521 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:29.521 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:29.521 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:30.087
00:16:30.087 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:30.087 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:30.087 17:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:30.087 {
00:16:30.087 "cntlid": 87,
00:16:30.087 "qid": 0,
00:16:30.087 "state": "enabled",
00:16:30.087 "thread": "nvmf_tgt_poll_group_000",
00:16:30.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:30.087 "listen_address": {
00:16:30.087 "trtype": "TCP",
00:16:30.087 "adrfam": "IPv4",
00:16:30.087 "traddr": "10.0.0.2",
00:16:30.087 "trsvcid": "4420"
00:16:30.087 },
00:16:30.087 "peer_address": {
00:16:30.087 "trtype": "TCP",
00:16:30.087 "adrfam": "IPv4",
00:16:30.087 "traddr": "10.0.0.1",
00:16:30.087 "trsvcid": "49292"
00:16:30.087 },
00:16:30.087 "auth": {
00:16:30.087 "state": "completed",
00:16:30.087 "digest": "sha384",
00:16:30.087 "dhgroup": "ffdhe6144"
00:16:30.087 }
00:16:30.087 }
00:16:30.087 ]'
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:30.087 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:30.345 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:30.345 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:30.345 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:30.345 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:30.345 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:30.603 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:16:30.603 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:31.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:31.169 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:31.169 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:31.735
00:16:31.735 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:31.735 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:31.735 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:31.992 {
00:16:31.992 "cntlid": 89,
00:16:31.992 "qid": 0,
00:16:31.992 "state": "enabled",
00:16:31.992 "thread": "nvmf_tgt_poll_group_000",
00:16:31.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:31.992 "listen_address": {
00:16:31.992 "trtype": "TCP",
00:16:31.992 "adrfam": "IPv4",
00:16:31.992 "traddr": "10.0.0.2",
"trsvcid": "4420" 00:16:31.992 }, 00:16:31.992 "peer_address": { 00:16:31.992 "trtype": "TCP", 00:16:31.992 "adrfam": "IPv4", 00:16:31.992 "traddr": "10.0.0.1", 00:16:31.992 "trsvcid": "49326" 00:16:31.992 }, 00:16:31.992 "auth": { 00:16:31.992 "state": "completed", 00:16:31.992 "digest": "sha384", 00:16:31.992 "dhgroup": "ffdhe8192" 00:16:31.992 } 00:16:31.992 } 00:16:31.992 ]' 00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.992 17:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.250 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:32.250 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:32.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.077 17:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.077 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.645 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.645 { 00:16:33.645 "cntlid": 91, 00:16:33.645 "qid": 0, 00:16:33.645 "state": "enabled", 00:16:33.645 "thread": "nvmf_tgt_poll_group_000", 00:16:33.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.645 "listen_address": { 00:16:33.645 "trtype": "TCP", 00:16:33.645 "adrfam": "IPv4", 00:16:33.645 "traddr": "10.0.0.2", 00:16:33.645 "trsvcid": "4420" 00:16:33.645 }, 00:16:33.645 "peer_address": { 00:16:33.645 "trtype": "TCP", 00:16:33.645 "adrfam": "IPv4", 00:16:33.645 "traddr": "10.0.0.1", 00:16:33.645 "trsvcid": "34474" 00:16:33.645 }, 00:16:33.645 "auth": { 00:16:33.645 "state": "completed", 00:16:33.645 "digest": "sha384", 00:16:33.645 "dhgroup": "ffdhe8192" 00:16:33.645 } 00:16:33.645 } 00:16:33.645 ]' 00:16:33.645 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.903 17:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.904 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.904 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.904 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.904 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.904 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.904 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.163 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:34.163 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.731 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.990 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.990 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.990 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.990 17:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.248 00:16:35.248 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.248 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.248 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.507 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.507 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.507 17:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.507 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.507 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.507 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.507 { 00:16:35.507 "cntlid": 93, 00:16:35.507 "qid": 0, 00:16:35.507 "state": "enabled", 00:16:35.507 "thread": "nvmf_tgt_poll_group_000", 00:16:35.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:35.507 "listen_address": { 00:16:35.507 "trtype": "TCP", 00:16:35.507 "adrfam": "IPv4", 00:16:35.507 "traddr": "10.0.0.2", 00:16:35.507 "trsvcid": "4420" 00:16:35.507 }, 00:16:35.507 "peer_address": { 00:16:35.507 "trtype": "TCP", 00:16:35.507 "adrfam": "IPv4", 00:16:35.507 "traddr": "10.0.0.1", 00:16:35.507 "trsvcid": "34502" 00:16:35.507 }, 00:16:35.507 "auth": { 00:16:35.507 "state": "completed", 00:16:35.507 "digest": "sha384", 00:16:35.507 "dhgroup": "ffdhe8192" 00:16:35.507 } 00:16:35.507 } 00:16:35.507 ]' 00:16:35.508 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.508 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.508 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.508 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.508 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.766 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.766 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.766 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.766 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:35.766 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:36.334 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.334 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:36.334 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.334 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.334 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.334 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.334 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.334 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.593 17:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.161 00:16:37.161 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.161 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.161 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.498 { 00:16:37.498 "cntlid": 95, 00:16:37.498 "qid": 0, 00:16:37.498 "state": "enabled", 00:16:37.498 "thread": "nvmf_tgt_poll_group_000", 00:16:37.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.498 "listen_address": { 00:16:37.498 "trtype": "TCP", 00:16:37.498 "adrfam": 
"IPv4", 00:16:37.498 "traddr": "10.0.0.2", 00:16:37.498 "trsvcid": "4420" 00:16:37.498 }, 00:16:37.498 "peer_address": { 00:16:37.498 "trtype": "TCP", 00:16:37.498 "adrfam": "IPv4", 00:16:37.498 "traddr": "10.0.0.1", 00:16:37.498 "trsvcid": "34536" 00:16:37.498 }, 00:16:37.498 "auth": { 00:16:37.498 "state": "completed", 00:16:37.498 "digest": "sha384", 00:16:37.498 "dhgroup": "ffdhe8192" 00:16:37.498 } 00:16:37.498 } 00:16:37.498 ]' 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.498 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:37.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.378 
17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.378 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.638 00:16:38.638 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.638 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.638 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.897 { 00:16:38.897 "cntlid": 97, 00:16:38.897 "qid": 0, 00:16:38.897 "state": "enabled", 00:16:38.897 "thread": "nvmf_tgt_poll_group_000", 00:16:38.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:38.897 "listen_address": { 00:16:38.897 "trtype": "TCP", 00:16:38.897 "adrfam": "IPv4", 00:16:38.897 "traddr": "10.0.0.2", 00:16:38.897 "trsvcid": "4420" 00:16:38.897 }, 00:16:38.897 "peer_address": { 00:16:38.897 "trtype": "TCP", 00:16:38.897 "adrfam": "IPv4", 00:16:38.897 "traddr": "10.0.0.1", 00:16:38.897 "trsvcid": "34550" 00:16:38.897 }, 00:16:38.897 "auth": { 00:16:38.897 "state": "completed", 00:16:38.897 "digest": "sha512", 00:16:38.897 "dhgroup": "null" 00:16:38.897 } 00:16:38.897 } 00:16:38.897 ]' 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.897 17:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.897 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.156 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.156 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.156 17:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.156 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:39.156 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:39.723 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.723 17:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.723 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.723 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.723 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.723 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.723 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:39.724 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.982 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.241 00:16:40.241 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.241 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.241 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.500 { 00:16:40.500 "cntlid": 99, 00:16:40.500 "qid": 0, 00:16:40.500 "state": "enabled", 00:16:40.500 "thread": "nvmf_tgt_poll_group_000", 00:16:40.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:40.500 "listen_address": { 00:16:40.500 "trtype": "TCP", 00:16:40.500 "adrfam": "IPv4", 00:16:40.500 "traddr": "10.0.0.2", 00:16:40.500 "trsvcid": "4420" 00:16:40.500 }, 00:16:40.500 "peer_address": { 00:16:40.500 "trtype": "TCP", 00:16:40.500 "adrfam": "IPv4", 00:16:40.500 "traddr": "10.0.0.1", 00:16:40.500 "trsvcid": "34578" 00:16:40.500 }, 00:16:40.500 "auth": { 00:16:40.500 "state": "completed", 00:16:40.500 "digest": "sha512", 00:16:40.500 "dhgroup": "null" 00:16:40.500 } 00:16:40.500 } 00:16:40.500 ]' 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.500 
17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.500 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.759 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:40.759 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:41.324 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.324 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.324 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.324 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.324 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.324 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.324 
17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.324 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.582 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.841 00:16:41.841 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.841 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.841 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.099 { 00:16:42.099 "cntlid": 101, 00:16:42.099 "qid": 0, 00:16:42.099 "state": "enabled", 00:16:42.099 "thread": "nvmf_tgt_poll_group_000", 00:16:42.099 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.099 "listen_address": { 00:16:42.099 "trtype": "TCP", 00:16:42.099 "adrfam": "IPv4", 00:16:42.099 "traddr": "10.0.0.2", 00:16:42.099 "trsvcid": "4420" 00:16:42.099 }, 00:16:42.099 "peer_address": { 00:16:42.099 "trtype": "TCP", 00:16:42.099 "adrfam": "IPv4", 00:16:42.099 "traddr": "10.0.0.1", 00:16:42.099 "trsvcid": "34596" 00:16:42.099 }, 00:16:42.099 "auth": { 00:16:42.099 "state": "completed", 00:16:42.099 "digest": "sha512", 00:16:42.099 "dhgroup": "null" 00:16:42.099 } 00:16:42.099 } 00:16:42.099 ]' 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.099 17:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.099 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.099 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.099 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.099 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.099 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.358 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:42.358 17:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:42.926 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.926 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:42.926 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.926 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.926 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.926 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.926 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:42.926 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.185 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.443 00:16:43.443 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.443 
17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.443 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.703 { 00:16:43.703 "cntlid": 103, 00:16:43.703 "qid": 0, 00:16:43.703 "state": "enabled", 00:16:43.703 "thread": "nvmf_tgt_poll_group_000", 00:16:43.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.703 "listen_address": { 00:16:43.703 "trtype": "TCP", 00:16:43.703 "adrfam": "IPv4", 00:16:43.703 "traddr": "10.0.0.2", 00:16:43.703 "trsvcid": "4420" 00:16:43.703 }, 00:16:43.703 "peer_address": { 00:16:43.703 "trtype": "TCP", 00:16:43.703 "adrfam": "IPv4", 00:16:43.703 "traddr": "10.0.0.1", 00:16:43.703 "trsvcid": "51584" 00:16:43.703 }, 00:16:43.703 "auth": { 00:16:43.703 "state": "completed", 00:16:43.703 "digest": "sha512", 00:16:43.703 "dhgroup": "null" 00:16:43.703 } 00:16:43.703 } 00:16:43.703 ]' 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.703 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.961 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:43.961 17:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.527 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.786 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.046 00:16:45.046 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.046 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.046 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.046 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.046 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.046 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:45.046 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.046 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.046 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.046 { 00:16:45.046 "cntlid": 105, 00:16:45.046 "qid": 0, 00:16:45.046 "state": "enabled", 00:16:45.046 "thread": "nvmf_tgt_poll_group_000", 00:16:45.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.046 "listen_address": { 00:16:45.046 "trtype": "TCP", 00:16:45.046 "adrfam": "IPv4", 00:16:45.046 "traddr": "10.0.0.2", 00:16:45.046 "trsvcid": "4420" 00:16:45.046 }, 00:16:45.046 "peer_address": { 00:16:45.046 "trtype": "TCP", 00:16:45.046 "adrfam": "IPv4", 00:16:45.046 "traddr": "10.0.0.1", 00:16:45.046 "trsvcid": "51608" 00:16:45.046 }, 00:16:45.046 "auth": { 00:16:45.046 "state": "completed", 00:16:45.046 "digest": "sha512", 00:16:45.046 "dhgroup": "ffdhe2048" 00:16:45.046 } 00:16:45.046 } 00:16:45.046 ]' 00:16:45.046 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.304 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.304 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.304 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.304 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.304 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.304 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.304 17:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.563 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:45.563 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:46.130 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.130 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.130 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.130 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.130 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.130 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.130 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:46.130 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.130 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.390 00:16:46.390 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.390 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.390 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.648 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.648 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.648 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.649 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.649 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.649 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.649 { 00:16:46.649 "cntlid": 107, 00:16:46.649 "qid": 0, 00:16:46.649 "state": "enabled", 00:16:46.649 "thread": "nvmf_tgt_poll_group_000", 00:16:46.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:46.649 
"listen_address": { 00:16:46.649 "trtype": "TCP", 00:16:46.649 "adrfam": "IPv4", 00:16:46.649 "traddr": "10.0.0.2", 00:16:46.649 "trsvcid": "4420" 00:16:46.649 }, 00:16:46.649 "peer_address": { 00:16:46.649 "trtype": "TCP", 00:16:46.649 "adrfam": "IPv4", 00:16:46.649 "traddr": "10.0.0.1", 00:16:46.649 "trsvcid": "51644" 00:16:46.649 }, 00:16:46.649 "auth": { 00:16:46.649 "state": "completed", 00:16:46.649 "digest": "sha512", 00:16:46.649 "dhgroup": "ffdhe2048" 00:16:46.649 } 00:16:46.649 } 00:16:46.649 ]' 00:16:46.649 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.649 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.649 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.649 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.649 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.908 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.908 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.908 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.908 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:46.908 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:47.476 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.476 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:47.476 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.476 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.476 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.476 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.476 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.476 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.735 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:47.735 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.735 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:47.735 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.735 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.735 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.735 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.736 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.736 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.736 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.736 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.736 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.736 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.012 00:16:48.012 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:48.013 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.013 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.272 { 00:16:48.272 "cntlid": 109, 00:16:48.272 "qid": 0, 00:16:48.272 "state": "enabled", 00:16:48.272 "thread": "nvmf_tgt_poll_group_000", 00:16:48.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:48.272 "listen_address": { 00:16:48.272 "trtype": "TCP", 00:16:48.272 "adrfam": "IPv4", 00:16:48.272 "traddr": "10.0.0.2", 00:16:48.272 "trsvcid": "4420" 00:16:48.272 }, 00:16:48.272 "peer_address": { 00:16:48.272 "trtype": "TCP", 00:16:48.272 "adrfam": "IPv4", 00:16:48.272 "traddr": "10.0.0.1", 00:16:48.272 "trsvcid": "51666" 00:16:48.272 }, 00:16:48.272 "auth": { 00:16:48.272 "state": "completed", 00:16:48.272 "digest": "sha512", 00:16:48.272 "dhgroup": "ffdhe2048" 00:16:48.272 } 00:16:48.272 } 00:16:48.272 ]' 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.272 17:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.272 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.531 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:48.531 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:49.098 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.098 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.098 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.098 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.098 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.098 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.098 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.098 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:49.358 17:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.358 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.617 00:16:49.617 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.617 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.617 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.877 17:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.877 { 00:16:49.877 "cntlid": 111, 00:16:49.877 "qid": 0, 00:16:49.877 "state": "enabled", 00:16:49.877 "thread": "nvmf_tgt_poll_group_000", 00:16:49.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:49.877 "listen_address": { 00:16:49.877 "trtype": "TCP", 00:16:49.877 "adrfam": "IPv4", 00:16:49.877 "traddr": "10.0.0.2", 00:16:49.877 "trsvcid": "4420" 00:16:49.877 }, 00:16:49.877 "peer_address": { 00:16:49.877 "trtype": "TCP", 00:16:49.877 "adrfam": "IPv4", 00:16:49.877 "traddr": "10.0.0.1", 00:16:49.877 "trsvcid": "51696" 00:16:49.877 }, 00:16:49.877 "auth": { 00:16:49.877 "state": "completed", 00:16:49.877 "digest": "sha512", 00:16:49.877 "dhgroup": "ffdhe2048" 00:16:49.877 } 00:16:49.877 } 00:16:49.877 ]' 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.877 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.877 17:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.136 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:50.136 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:16:50.705 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.964 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.965 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.965 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.965 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.965 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.223 00:16:51.223 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.223 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.223 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.223 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.223 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.223 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.223 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.482 { 00:16:51.482 "cntlid": 113, 00:16:51.482 "qid": 0, 00:16:51.482 "state": "enabled", 00:16:51.482 "thread": "nvmf_tgt_poll_group_000", 00:16:51.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:51.482 "listen_address": { 
00:16:51.482 "trtype": "TCP", 00:16:51.482 "adrfam": "IPv4", 00:16:51.482 "traddr": "10.0.0.2", 00:16:51.482 "trsvcid": "4420" 00:16:51.482 }, 00:16:51.482 "peer_address": { 00:16:51.482 "trtype": "TCP", 00:16:51.482 "adrfam": "IPv4", 00:16:51.482 "traddr": "10.0.0.1", 00:16:51.482 "trsvcid": "51716" 00:16:51.482 }, 00:16:51.482 "auth": { 00:16:51.482 "state": "completed", 00:16:51.482 "digest": "sha512", 00:16:51.482 "dhgroup": "ffdhe3072" 00:16:51.482 } 00:16:51.482 } 00:16:51.482 ]' 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.482 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.741 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:51.741 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:52.309 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.309 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.309 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.309 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.309 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.309 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.309 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.309 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.568 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.828 00:16:52.828 17:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.828 { 00:16:52.828 "cntlid": 115, 00:16:52.828 "qid": 0, 00:16:52.828 "state": "enabled", 00:16:52.828 "thread": "nvmf_tgt_poll_group_000", 00:16:52.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.828 "listen_address": { 00:16:52.828 "trtype": "TCP", 00:16:52.828 "adrfam": "IPv4", 00:16:52.828 "traddr": "10.0.0.2", 00:16:52.828 "trsvcid": "4420" 00:16:52.828 }, 00:16:52.828 "peer_address": { 00:16:52.828 "trtype": "TCP", 00:16:52.828 "adrfam": "IPv4", 00:16:52.828 "traddr": "10.0.0.1", 00:16:52.828 "trsvcid": "58230" 00:16:52.828 }, 00:16:52.828 "auth": { 00:16:52.828 "state": "completed", 00:16:52.828 "digest": "sha512", 00:16:52.828 "dhgroup": "ffdhe3072" 00:16:52.828 } 00:16:52.828 } 00:16:52.828 ]' 00:16:52.828 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:16:53.087 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.087 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.087 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.087 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.087 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.087 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.087 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.346 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:53.346 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:53.915 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.915 17:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.915 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.915 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.915 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.915 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.915 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.915 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.174 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.434 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.434 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.434 { 00:16:54.434 "cntlid": 117, 00:16:54.434 "qid": 0, 00:16:54.434 "state": "enabled", 00:16:54.434 "thread": "nvmf_tgt_poll_group_000", 00:16:54.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:54.434 "listen_address": { 00:16:54.434 "trtype": "TCP", 00:16:54.434 "adrfam": "IPv4", 00:16:54.434 "traddr": "10.0.0.2", 00:16:54.434 "trsvcid": "4420" 00:16:54.434 }, 00:16:54.434 "peer_address": { 00:16:54.434 "trtype": "TCP", 00:16:54.434 "adrfam": "IPv4", 00:16:54.434 "traddr": "10.0.0.1", 00:16:54.434 "trsvcid": "58264" 00:16:54.434 }, 00:16:54.434 "auth": { 00:16:54.434 "state": "completed", 00:16:54.434 "digest": "sha512", 00:16:54.434 "dhgroup": "ffdhe3072" 00:16:54.434 } 00:16:54.434 } 00:16:54.434 ]' 00:16:54.692 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.692 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.692 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.692 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.692 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.692 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:54.693 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.693 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.952 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:54.952 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:16:55.520 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.520 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.520 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.520 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.520 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.520 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:55.520 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.520 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.779 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:55.779 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.779 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.780 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.038 00:16:56.039 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.039 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.039 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.039 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.039 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.039 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.039 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.039 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.039 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.039 { 00:16:56.039 "cntlid": 119, 00:16:56.039 "qid": 0, 00:16:56.039 "state": "enabled", 00:16:56.039 "thread": "nvmf_tgt_poll_group_000", 00:16:56.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.039 "listen_address": { 00:16:56.039 
"trtype": "TCP", 00:16:56.039 "adrfam": "IPv4", 00:16:56.039 "traddr": "10.0.0.2", 00:16:56.039 "trsvcid": "4420" 00:16:56.039 }, 00:16:56.039 "peer_address": { 00:16:56.039 "trtype": "TCP", 00:16:56.039 "adrfam": "IPv4", 00:16:56.039 "traddr": "10.0.0.1", 00:16:56.039 "trsvcid": "58296" 00:16:56.039 }, 00:16:56.039 "auth": { 00:16:56.039 "state": "completed", 00:16:56.039 "digest": "sha512", 00:16:56.039 "dhgroup": "ffdhe3072" 00:16:56.039 } 00:16:56.039 } 00:16:56.039 ]' 00:16:56.039 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.297 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.297 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.297 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.297 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.297 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.297 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.297 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.556 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:56.557 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.125 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.384 17:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.384 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.643 00:16:57.643 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.643 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.643 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.902 { 00:16:57.902 "cntlid": 121, 00:16:57.902 "qid": 0, 00:16:57.902 "state": "enabled", 00:16:57.902 "thread": "nvmf_tgt_poll_group_000", 00:16:57.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.902 "listen_address": { 00:16:57.902 "trtype": "TCP", 00:16:57.902 "adrfam": "IPv4", 00:16:57.902 "traddr": "10.0.0.2", 00:16:57.902 "trsvcid": "4420" 00:16:57.902 }, 00:16:57.902 "peer_address": { 00:16:57.902 "trtype": "TCP", 00:16:57.902 "adrfam": "IPv4", 00:16:57.902 "traddr": "10.0.0.1", 00:16:57.902 "trsvcid": "58322" 00:16:57.902 }, 00:16:57.902 "auth": { 00:16:57.902 "state": "completed", 00:16:57.902 "digest": "sha512", 00:16:57.902 "dhgroup": "ffdhe4096" 00:16:57.902 } 00:16:57.902 } 00:16:57.902 ]' 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.902 17:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.902 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.162 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:58.162 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:16:58.730 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:58.730 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.730 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.730 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.730 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.730 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.730 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.989 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.248 00:16:59.248 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.248 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.248 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.507 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.507 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.507 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.507 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.507 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.507 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.507 { 00:16:59.507 "cntlid": 123, 00:16:59.507 "qid": 0, 00:16:59.507 "state": "enabled", 00:16:59.507 "thread": "nvmf_tgt_poll_group_000", 00:16:59.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.507 "listen_address": { 00:16:59.507 "trtype": "TCP", 00:16:59.507 "adrfam": "IPv4", 00:16:59.508 "traddr": "10.0.0.2", 00:16:59.508 "trsvcid": "4420" 00:16:59.508 }, 00:16:59.508 "peer_address": { 00:16:59.508 "trtype": "TCP", 00:16:59.508 "adrfam": "IPv4", 00:16:59.508 "traddr": "10.0.0.1", 00:16:59.508 "trsvcid": "58348" 00:16:59.508 }, 00:16:59.508 "auth": { 00:16:59.508 "state": "completed", 00:16:59.508 "digest": "sha512", 00:16:59.508 "dhgroup": "ffdhe4096" 00:16:59.508 } 00:16:59.508 } 00:16:59.508 ]' 00:16:59.508 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.508 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.508 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.508 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.508 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.508 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:59.508 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.508 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.767 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:16:59.767 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:17:00.335 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.335 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.335 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.335 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.335 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.335 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:00.335 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:00.335 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:00.594 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:00.854
00:17:00.854 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:00.854 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:00.854 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:00.854 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:00.854 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:00.854 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.854 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.113 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.113 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:01.113 {
00:17:01.113 "cntlid": 125,
00:17:01.113 "qid": 0,
00:17:01.113 "state": "enabled",
00:17:01.113 "thread": "nvmf_tgt_poll_group_000",
00:17:01.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:01.113 "listen_address": {
00:17:01.113 "trtype": "TCP",
00:17:01.113 "adrfam": "IPv4",
00:17:01.113 "traddr": "10.0.0.2",
00:17:01.113 "trsvcid": "4420"
00:17:01.113 },
00:17:01.113 "peer_address": {
00:17:01.113 "trtype": "TCP",
00:17:01.113 "adrfam": "IPv4",
00:17:01.113 "traddr": "10.0.0.1",
00:17:01.113 "trsvcid": "58370"
00:17:01.113 },
00:17:01.113 "auth": {
00:17:01.113 "state": "completed",
00:17:01.113 "digest": "sha512",
00:17:01.113 "dhgroup": "ffdhe4096"
00:17:01.113 }
00:17:01.113 }
00:17:01.113 ]'
00:17:01.113 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:01.113 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:01.113 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:01.113 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:01.113 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:01.113 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:01.113 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:01.113 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:01.371 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:17:01.371 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:17:01.938 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:01.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:01.938 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:01.938 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:01.938 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.938 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:01.938 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:01.938 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:01.938 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:02.197 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:02.457
00:17:02.457 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:02.457 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:02.457 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:02.457 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:02.457 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:02.457 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:02.457 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:02.716 {
00:17:02.716 "cntlid": 127,
00:17:02.716 "qid": 0,
00:17:02.716 "state": "enabled",
00:17:02.716 "thread": "nvmf_tgt_poll_group_000",
00:17:02.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:02.716 "listen_address": {
00:17:02.716 "trtype": "TCP",
00:17:02.716 "adrfam": "IPv4",
00:17:02.716 "traddr": "10.0.0.2",
00:17:02.716 "trsvcid": "4420"
00:17:02.716 },
00:17:02.716 "peer_address": {
00:17:02.716 "trtype": "TCP",
00:17:02.716 "adrfam": "IPv4",
00:17:02.716 "traddr": "10.0.0.1",
00:17:02.716 "trsvcid": "43048"
00:17:02.716 },
00:17:02.716 "auth": {
00:17:02.716 "state": "completed",
00:17:02.716 "digest": "sha512",
00:17:02.716 "dhgroup": "ffdhe4096"
00:17:02.716 }
00:17:02.716 }
00:17:02.716 ]'
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:02.716 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:02.975 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:17:02.975 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:03.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.543 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.801 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.802 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:03.802 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:03.802 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:04.060
00:17:04.060 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:04.060 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:04.060 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:04.319 {
00:17:04.319 "cntlid": 129,
00:17:04.319 "qid": 0,
00:17:04.319 "state": "enabled",
00:17:04.319 "thread": "nvmf_tgt_poll_group_000",
00:17:04.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:04.319 "listen_address": {
00:17:04.319 "trtype": "TCP",
00:17:04.319 "adrfam": "IPv4",
00:17:04.319 "traddr": "10.0.0.2",
00:17:04.319 "trsvcid": "4420"
00:17:04.319 },
00:17:04.319 "peer_address": {
00:17:04.319 "trtype": "TCP",
00:17:04.319 "adrfam": "IPv4",
00:17:04.319 "traddr": "10.0.0.1",
00:17:04.319 "trsvcid": "43078"
00:17:04.319 },
00:17:04.319 "auth": {
00:17:04.319 "state": "completed",
00:17:04.319 "digest": "sha512",
00:17:04.319 "dhgroup": "ffdhe6144" }
00:17:04.319 }
00:17:04.319 ]'
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:04.319 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:04.578 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=:
00:17:04.578 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=:
00:17:05.145 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:05.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:05.145 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:05.145 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.145 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.145 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.145 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:05.145 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:05.145 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:05.404 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:05.662
00:17:05.662 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:05.662 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:05.662 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:05.921 {
00:17:05.921 "cntlid": 131,
00:17:05.921 "qid": 0,
00:17:05.921 "state": "enabled",
00:17:05.921 "thread": "nvmf_tgt_poll_group_000",
00:17:05.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:05.921 "listen_address": {
00:17:05.921 "trtype": "TCP",
00:17:05.921 "adrfam": "IPv4",
00:17:05.921 "traddr": "10.0.0.2",
00:17:05.921 "trsvcid": "4420"
00:17:05.921 },
00:17:05.921 "peer_address": {
00:17:05.921 "trtype": "TCP",
00:17:05.921 "adrfam": "IPv4",
00:17:05.921 "traddr": "10.0.0.1",
00:17:05.921 "trsvcid": "43106"
00:17:05.921 },
00:17:05.921 "auth": {
00:17:05.921 "state": "completed",
00:17:05.921 "digest": "sha512",
00:17:05.921 "dhgroup": "ffdhe6144" }
00:17:05.921 }
00:17:05.921 ]'
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:05.921 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:06.180 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:06.180 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.180 17:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:06.180 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==:
00:17:06.180 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==:
00:17:06.748 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:06.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:06.748 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:06.748 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.748 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.748 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.748 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:06.748 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:06.748 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:07.007 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:07.266
00:17:07.266 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:07.266 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:07.266 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:07.525 {
00:17:07.525 "cntlid": 133,
00:17:07.525 "qid": 0,
00:17:07.525 "state": "enabled",
00:17:07.525 "thread": "nvmf_tgt_poll_group_000",
00:17:07.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:07.525 "listen_address": {
00:17:07.525 "trtype": "TCP",
00:17:07.525 "adrfam": "IPv4",
00:17:07.525 "traddr": "10.0.0.2",
00:17:07.525 "trsvcid": "4420"
00:17:07.525 },
00:17:07.525 "peer_address": {
00:17:07.525 "trtype": "TCP",
00:17:07.525 "adrfam": "IPv4",
00:17:07.525 "traddr": "10.0.0.1",
00:17:07.525 "trsvcid": "43132"
00:17:07.525 },
00:17:07.525 "auth": {
00:17:07.525 "state": "completed",
00:17:07.525 "digest": "sha512",
00:17:07.525 "dhgroup": "ffdhe6144" }
00:17:07.525 }
00:17:07.525 ]'
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:07.525 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:07.784 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:07.785 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:07.785 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:07.785 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:07.785 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:07.785 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:17:07.785 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX:
00:17:08.353 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:08.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:08.353 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:08.353 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.353 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.353 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.353 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:08.353 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:08.353 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:08.612 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:09.180
00:17:09.180 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:09.180 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:09.180 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:09.180 {
00:17:09.180 "cntlid": 135,
00:17:09.180 "qid": 0,
00:17:09.180 "state": "enabled",
00:17:09.180 "thread": "nvmf_tgt_poll_group_000",
00:17:09.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:09.180 "listen_address": {
00:17:09.180 "trtype": "TCP",
00:17:09.180 "adrfam": "IPv4",
00:17:09.180 "traddr": "10.0.0.2",
00:17:09.180 "trsvcid": "4420"
00:17:09.180 },
00:17:09.180 "peer_address": {
00:17:09.180 "trtype": "TCP",
00:17:09.180 "adrfam": "IPv4",
00:17:09.180 "traddr": "10.0.0.1",
00:17:09.180 "trsvcid": "43160"
00:17:09.180 },
00:17:09.180 "auth": {
00:17:09.180 "state": "completed",
00:17:09.180 "digest": "sha512",
00:17:09.180 "dhgroup": "ffdhe6144" }
00:17:09.180 }
00:17:09.180 ]'
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:09.180 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:09.439 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:09.439 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:09.439 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:09.439 17:11:27
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.439 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.439 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:17:09.439 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:17:10.006 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.006 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.006 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.006 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.265 17:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.266 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.266 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.266 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.831 00:17:10.831 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.831 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.831 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.088 { 00:17:11.088 "cntlid": 137, 00:17:11.088 "qid": 0, 00:17:11.088 "state": "enabled", 00:17:11.088 "thread": "nvmf_tgt_poll_group_000", 00:17:11.088 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.088 "listen_address": { 00:17:11.088 "trtype": "TCP", 00:17:11.088 "adrfam": "IPv4", 00:17:11.088 "traddr": "10.0.0.2", 00:17:11.088 "trsvcid": "4420" 00:17:11.088 }, 00:17:11.088 "peer_address": { 00:17:11.088 "trtype": "TCP", 00:17:11.088 "adrfam": "IPv4", 00:17:11.088 "traddr": "10.0.0.1", 00:17:11.088 "trsvcid": "43186" 00:17:11.088 }, 00:17:11.088 "auth": { 00:17:11.088 "state": "completed", 00:17:11.088 "digest": "sha512", 00:17:11.088 "dhgroup": "ffdhe8192" 00:17:11.088 } 00:17:11.088 } 00:17:11.088 ]' 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.088 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.088 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.088 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.088 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.088 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.088 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.346 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret 
DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:17:11.347 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:17:11.912 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.912 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.912 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.912 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.912 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.913 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.913 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:11.913 17:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.171 17:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:12.171 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.172 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.750 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.750 { 00:17:12.750 "cntlid": 139, 00:17:12.750 "qid": 0, 00:17:12.750 "state": "enabled", 00:17:12.750 "thread": "nvmf_tgt_poll_group_000", 00:17:12.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.750 "listen_address": { 00:17:12.750 "trtype": "TCP", 00:17:12.750 "adrfam": "IPv4", 00:17:12.750 "traddr": "10.0.0.2", 00:17:12.750 "trsvcid": "4420" 00:17:12.750 }, 00:17:12.750 "peer_address": { 00:17:12.750 "trtype": "TCP", 00:17:12.750 "adrfam": "IPv4", 00:17:12.750 "traddr": "10.0.0.1", 00:17:12.750 "trsvcid": "47762" 00:17:12.750 }, 00:17:12.750 "auth": { 00:17:12.750 "state": 
"completed", 00:17:12.750 "digest": "sha512", 00:17:12.750 "dhgroup": "ffdhe8192" 00:17:12.750 } 00:17:12.750 } 00:17:12.750 ]' 00:17:12.750 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.010 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.010 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.010 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.010 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.010 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.010 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.010 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.268 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:17:13.269 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: --dhchap-ctrl-secret DHHC-1:02:YzFkOTUxYzE5MmNkZTY4YTZhNDgyNzUwMzRhYjk3MjBmYzE2MzkxNjNmMTMwZDkzbTCZgg==: 00:17:13.836 17:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.836 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.837 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.405 00:17:14.405 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.405 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.405 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.664 
17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.664 { 00:17:14.664 "cntlid": 141, 00:17:14.664 "qid": 0, 00:17:14.664 "state": "enabled", 00:17:14.664 "thread": "nvmf_tgt_poll_group_000", 00:17:14.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:14.664 "listen_address": { 00:17:14.664 "trtype": "TCP", 00:17:14.664 "adrfam": "IPv4", 00:17:14.664 "traddr": "10.0.0.2", 00:17:14.664 "trsvcid": "4420" 00:17:14.664 }, 00:17:14.664 "peer_address": { 00:17:14.664 "trtype": "TCP", 00:17:14.664 "adrfam": "IPv4", 00:17:14.664 "traddr": "10.0.0.1", 00:17:14.664 "trsvcid": "47794" 00:17:14.664 }, 00:17:14.664 "auth": { 00:17:14.664 "state": "completed", 00:17:14.664 "digest": "sha512", 00:17:14.664 "dhgroup": "ffdhe8192" 00:17:14.664 } 00:17:14.664 } 00:17:14.664 ]' 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.664 17:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.664 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.922 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:17:14.922 17:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:01:ZjVkNGQyM2ZlYWIyZmYxYTFmZjBlYzI1Mjc3NzIwZTeuNFiX: 00:17:15.526 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.526 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:15.526 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.526 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.526 
17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.527 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.527 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.527 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.807 17:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.807 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.119 00:17:16.119 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.119 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.119 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.378 { 00:17:16.378 "cntlid": 143, 
00:17:16.378 "qid": 0, 00:17:16.378 "state": "enabled", 00:17:16.378 "thread": "nvmf_tgt_poll_group_000", 00:17:16.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:16.378 "listen_address": { 00:17:16.378 "trtype": "TCP", 00:17:16.378 "adrfam": "IPv4", 00:17:16.378 "traddr": "10.0.0.2", 00:17:16.378 "trsvcid": "4420" 00:17:16.378 }, 00:17:16.378 "peer_address": { 00:17:16.378 "trtype": "TCP", 00:17:16.378 "adrfam": "IPv4", 00:17:16.378 "traddr": "10.0.0.1", 00:17:16.378 "trsvcid": "47818" 00:17:16.378 }, 00:17:16.378 "auth": { 00:17:16.378 "state": "completed", 00:17:16.378 "digest": "sha512", 00:17:16.378 "dhgroup": "ffdhe8192" 00:17:16.378 } 00:17:16.378 } 00:17:16.378 ]' 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.378 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.636 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.636 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.636 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.636 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.636 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.636 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:17:16.636 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=: 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:17:17.203 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.462 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.029 00:17:18.029 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.029 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.029 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.288 { 00:17:18.288 "cntlid": 145, 00:17:18.288 "qid": 0, 00:17:18.288 "state": "enabled", 00:17:18.288 "thread": "nvmf_tgt_poll_group_000", 00:17:18.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:18.288 "listen_address": { 
00:17:18.288 "trtype": "TCP", 00:17:18.288 "adrfam": "IPv4", 00:17:18.288 "traddr": "10.0.0.2", 00:17:18.288 "trsvcid": "4420" 00:17:18.288 }, 00:17:18.288 "peer_address": { 00:17:18.288 "trtype": "TCP", 00:17:18.288 "adrfam": "IPv4", 00:17:18.288 "traddr": "10.0.0.1", 00:17:18.288 "trsvcid": "47828" 00:17:18.288 }, 00:17:18.288 "auth": { 00:17:18.288 "state": "completed", 00:17:18.288 "digest": "sha512", 00:17:18.288 "dhgroup": "ffdhe8192" 00:17:18.288 } 00:17:18.288 } 00:17:18.288 ]' 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.288 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.547 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:17:18.547 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NDBiODlkOGI4MDZjOWY4NTRjOGE4YWJjZjZhNzE4OGJiZTlkOWExMmQ4NDJlNTBjfD2DxA==: --dhchap-ctrl-secret DHHC-1:03:ZGE5ZjhiMGQ0NzE1ZmZhYWZmY2E0ZmIyYWJjYzgxNDVkZmVjYmY3NjIxMTZhMzMyYzk3YTQwMjljYWRlNWRiNUfCdRY=: 00:17:19.114 17:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:19.114 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:19.682 request: 00:17:19.682 { 00:17:19.682 "name": "nvme0", 00:17:19.682 "trtype": "tcp", 00:17:19.682 "traddr": "10.0.0.2", 00:17:19.682 "adrfam": "ipv4", 00:17:19.682 "trsvcid": "4420", 00:17:19.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.682 "prchk_reftag": false, 00:17:19.682 "prchk_guard": false, 00:17:19.682 "hdgst": false, 00:17:19.682 "ddgst": 
false, 00:17:19.682 "dhchap_key": "key2", 00:17:19.682 "allow_unrecognized_csi": false, 00:17:19.682 "method": "bdev_nvme_attach_controller", 00:17:19.682 "req_id": 1 00:17:19.682 } 00:17:19.682 Got JSON-RPC error response 00:17:19.682 response: 00:17:19.682 { 00:17:19.682 "code": -5, 00:17:19.682 "message": "Input/output error" 00:17:19.682 } 00:17:19.682 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.682 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.682 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.682 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.682 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.683 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:19.947 request: 00:17:19.947 { 00:17:19.947 "name": "nvme0", 00:17:19.947 "trtype": "tcp", 00:17:19.947 "traddr": "10.0.0.2", 
00:17:19.947 "adrfam": "ipv4", 00:17:19.947 "trsvcid": "4420", 00:17:19.947 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.947 "prchk_reftag": false, 00:17:19.947 "prchk_guard": false, 00:17:19.947 "hdgst": false, 00:17:19.947 "ddgst": false, 00:17:19.947 "dhchap_key": "key1", 00:17:19.947 "dhchap_ctrlr_key": "ckey2", 00:17:19.947 "allow_unrecognized_csi": false, 00:17:19.947 "method": "bdev_nvme_attach_controller", 00:17:19.947 "req_id": 1 00:17:19.947 } 00:17:19.947 Got JSON-RPC error response 00:17:19.947 response: 00:17:19.948 { 00:17:19.948 "code": -5, 00:17:19.948 "message": "Input/output error" 00:17:19.948 } 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.948 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.519 request: 00:17:20.519 { 00:17:20.519 "name": "nvme0", 00:17:20.519 "trtype": "tcp", 00:17:20.519 "traddr": "10.0.0.2", 00:17:20.519 "adrfam": "ipv4", 00:17:20.519 "trsvcid": "4420", 00:17:20.519 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:20.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:20.519 "prchk_reftag": false, 00:17:20.519 "prchk_guard": false, 00:17:20.519 "hdgst": false, 00:17:20.519 "ddgst": false, 00:17:20.519 "dhchap_key": "key1", 00:17:20.519 "dhchap_ctrlr_key": "ckey1", 00:17:20.519 "allow_unrecognized_csi": false, 00:17:20.519 "method": "bdev_nvme_attach_controller", 00:17:20.519 "req_id": 1 00:17:20.519 } 00:17:20.519 Got JSON-RPC error response 00:17:20.519 response: 00:17:20.519 { 00:17:20.519 "code": -5, 00:17:20.519 "message": "Input/output error" 00:17:20.519 } 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.519 
17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2474905 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2474905 ']' 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2474905 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474905 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474905' 00:17:20.519 killing process with pid 2474905 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2474905 00:17:20.519 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2474905 00:17:20.778 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:20.778 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.778 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.778 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:20.778 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2496412 00:17:20.779 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2496412 00:17:20.779 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:20.779 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2496412 ']' 00:17:20.779 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.779 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.779 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:20.779 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.779 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2496412 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2496412 ']' 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.038 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.297 null0 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RF9 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.vng ]] 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vng 00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.297 17:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.leL
00:17:21.297 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.xok ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xok
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lOS
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ZdG ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZdG
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.z04
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:21.298 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:22.234 nvme0n1
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.235 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:22.235 {
00:17:22.235 "cntlid": 1,
00:17:22.235 "qid": 0,
00:17:22.235 "state": "enabled",
00:17:22.235 "thread": "nvmf_tgt_poll_group_000",
00:17:22.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:22.235 "listen_address": {
00:17:22.235 "trtype": "TCP",
00:17:22.235 "adrfam": "IPv4",
00:17:22.235 "traddr": "10.0.0.2",
00:17:22.235 "trsvcid": "4420"
00:17:22.235 },
00:17:22.235 "peer_address": {
00:17:22.235 "trtype": "TCP",
00:17:22.235 "adrfam": "IPv4",
00:17:22.235 "traddr": "10.0.0.1",
00:17:22.235 "trsvcid": "47884"
00:17:22.235 },
00:17:22.235 "auth": {
00:17:22.235 "state": "completed",
00:17:22.235 "digest": "sha512",
00:17:22.235 "dhgroup": "ffdhe8192"
00:17:22.235 }
00:17:22.235 }
00:17:22.235 ]'
00:17:22.494 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:22.494 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:22.494 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:22.494 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:22.494 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:22.494 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:22.494 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:22.494 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:22.753 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:17:22.753 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:23.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:17:23.320 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:23.579 request:
00:17:23.579 {
00:17:23.579 "name": "nvme0",
00:17:23.579 "trtype": "tcp",
00:17:23.579 "traddr": "10.0.0.2",
00:17:23.579 "adrfam": "ipv4",
00:17:23.579 "trsvcid": "4420",
00:17:23.579 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:23.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:23.579 "prchk_reftag": false,
00:17:23.579 "prchk_guard": false,
00:17:23.579 "hdgst": false,
00:17:23.579 "ddgst": false,
00:17:23.579 "dhchap_key": "key3",
00:17:23.579 "allow_unrecognized_csi": false,
00:17:23.579 "method": "bdev_nvme_attach_controller",
00:17:23.579 "req_id": 1
00:17:23.579 }
00:17:23.579 Got JSON-RPC error response
00:17:23.579 response:
00:17:23.579 {
00:17:23.579 "code": -5,
00:17:23.579 "message": "Input/output error"
00:17:23.579 }
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:23.579 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:23.838 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:23.839 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:24.097 request:
00:17:24.097 {
00:17:24.097 "name": "nvme0",
00:17:24.097 "trtype": "tcp",
00:17:24.097 "traddr": "10.0.0.2",
00:17:24.097 "adrfam": "ipv4",
00:17:24.097 "trsvcid": "4420",
00:17:24.097 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:24.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:24.097 "prchk_reftag": false,
00:17:24.097 "prchk_guard": false,
00:17:24.097 "hdgst": false,
00:17:24.097 "ddgst": false,
00:17:24.097 "dhchap_key": "key3",
00:17:24.097 "allow_unrecognized_csi": false,
00:17:24.097 "method": "bdev_nvme_attach_controller",
00:17:24.097 "req_id": 1
00:17:24.097 }
00:17:24.097 Got JSON-RPC error response
00:17:24.097 response:
00:17:24.097 {
00:17:24.097 "code": -5,
00:17:24.097 "message": "Input/output error"
00:17:24.097 }
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:24.097 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:24.356 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:24.615 request:
00:17:24.615 {
00:17:24.615 "name": "nvme0",
00:17:24.615 "trtype": "tcp",
00:17:24.615 "traddr": "10.0.0.2",
00:17:24.615 "adrfam": "ipv4",
00:17:24.615 "trsvcid": "4420",
00:17:24.615 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:24.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:24.615 "prchk_reftag": false,
00:17:24.615 "prchk_guard": false,
00:17:24.615 "hdgst": false,
00:17:24.615 "ddgst": false,
00:17:24.615 "dhchap_key": "key0",
00:17:24.615 "dhchap_ctrlr_key": "key1",
00:17:24.615 "allow_unrecognized_csi": false,
00:17:24.615 "method": "bdev_nvme_attach_controller",
00:17:24.615 "req_id": 1
00:17:24.615 }
00:17:24.615 Got JSON-RPC error response
00:17:24.615 response:
00:17:24.615 {
00:17:24.615 "code": -5,
00:17:24.615 "message": "Input/output error"
00:17:24.615 }
00:17:24.615 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:24.615 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:24.615 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:24.615 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:24.615 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:17:24.615 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:17:24.615 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:17:24.874 nvme0n1
00:17:24.874 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:17:24.874 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.874 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:17:25.133 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:25.133 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:25.133 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:25.391 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1
00:17:25.391 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.391 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.391 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.392 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:25.392 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:25.392 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:25.957 nvme0n1
00:17:25.957 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:17:25.957 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:17:25.957 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.216 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.216 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:26.216 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.216 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.216 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.216 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:17:26.216 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:17:26.216 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.473 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.474 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:17:26.474 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: --dhchap-ctrl-secret DHHC-1:03:NmIxOTg0NjQ4NjRiNzQ5ZDJiZjE1NzE5ZDZkYWFiOTdlNjJlZTEwNzZkZTc5ZmJiYmQzZWJhNTU1NDdkZDhiM5jvBEc=:
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:27.041 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:27.299 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:17:27.299 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:27.299 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:17:27.299 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:27.299 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:27.300 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:27.300 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:27.300 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:27.300 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:27.300 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:27.558 request:
00:17:27.558 {
00:17:27.558 "name": "nvme0",
00:17:27.558 "trtype": "tcp",
00:17:27.558 "traddr": "10.0.0.2",
00:17:27.558 "adrfam": "ipv4",
00:17:27.558 "trsvcid": "4420",
00:17:27.558 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:27.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:27.558 "prchk_reftag": false,
00:17:27.558 "prchk_guard": false,
00:17:27.558 "hdgst": false,
00:17:27.558 "ddgst": false,
00:17:27.558 "dhchap_key": "key1",
00:17:27.558 "allow_unrecognized_csi": false,
00:17:27.558 "method": "bdev_nvme_attach_controller",
00:17:27.558 "req_id": 1
00:17:27.558 }
00:17:27.558 Got JSON-RPC error response
00:17:27.558 response:
00:17:27.558 {
00:17:27.558 "code": -5,
00:17:27.558 "message": "Input/output error"
00:17:27.558 }
00:17:27.558 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:27.558 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:27.558 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:27.558 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:27.558 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:27.558 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:27.558 17:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:28.494 nvme0n1
00:17:28.494 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:17:28.494 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:17:28.494 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:28.494 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:28.494 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:28.494 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:28.752 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:28.753 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.753 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.753 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.753 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:17:28.753 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:17:28.753 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:17:29.011 nvme0n1
00:17:29.011 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:17:29.011 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:17:29.011 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.270 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:29.270 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:29.270 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: '' 2s
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b:
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b: ]]
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:N2MxZjFhNzIxNzIzYWJkYjg4MDg4MGVkMTI5MTkyNjGqOV/b:
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:17:29.528 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: 2s
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==:
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: ]]
00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo
DHHC-1:02:ZDFiNjhjZGZmZjJkZWRhM2ZkN2E0ZDgyODIyZDM1MzlhNWE2MDY1MWM0ODhkNjM5ZZRQHQ==: 00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:31.430 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.965 17:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.965 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:34.224 nvme0n1 00:17:34.224 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.224 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.224 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.224 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.224 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.224 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.791 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:34.791 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:34.791 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.049 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.049 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.049 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.049 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.049 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.049 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:35.049 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:35.307 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:35.307 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:35.307 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.307 17:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.307 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:35.308 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:35.874 request: 00:17:35.874 { 00:17:35.874 "name": "nvme0", 00:17:35.874 "dhchap_key": "key1", 00:17:35.874 "dhchap_ctrlr_key": "key3", 00:17:35.874 "method": "bdev_nvme_set_keys", 00:17:35.874 "req_id": 1 00:17:35.874 } 00:17:35.874 Got JSON-RPC error response 00:17:35.874 response: 00:17:35.874 { 00:17:35.874 "code": -13, 00:17:35.874 "message": "Permission denied" 00:17:35.874 } 00:17:35.874 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:35.874 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.874 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.874 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.874 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:35.874 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:35.874 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.132 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:36.132 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:37.066 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:37.066 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:37.066 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.325 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:37.325 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.325 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.325 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.325 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.325 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:37.325 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:37.325 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:38.259 nvme0n1 00:17:38.259 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:38.259 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.259 17:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:38.259 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:38.517 request: 00:17:38.517 { 00:17:38.517 "name": "nvme0", 00:17:38.517 "dhchap_key": "key2", 
00:17:38.517 "dhchap_ctrlr_key": "key0", 00:17:38.517 "method": "bdev_nvme_set_keys", 00:17:38.517 "req_id": 1 00:17:38.517 } 00:17:38.517 Got JSON-RPC error response 00:17:38.517 response: 00:17:38.517 { 00:17:38.517 "code": -13, 00:17:38.517 "message": "Permission denied" 00:17:38.517 } 00:17:38.517 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:38.517 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:38.517 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:38.517 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:38.517 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:38.517 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:38.517 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.774 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:38.774 17:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:39.707 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:39.707 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:39.707 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.964 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:39.964 17:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:39.964 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:39.964 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2474987 00:17:39.964 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2474987 ']' 00:17:39.964 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2474987 00:17:39.965 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:39.965 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.965 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474987 00:17:39.965 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:39.965 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:39.965 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474987' 00:17:39.965 killing process with pid 2474987 00:17:39.965 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2474987 00:17:39.965 17:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2474987 00:17:40.222 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:40.222 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.222 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:40.222 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.222 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:40.222 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.222 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.222 rmmod nvme_tcp 00:17:40.222 rmmod nvme_fabrics 00:17:40.481 rmmod nvme_keyring 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2496412 ']' 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2496412 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2496412 ']' 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2496412 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496412 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2496412' 00:17:40.481 killing process with pid 2496412 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2496412 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2496412 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.481 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RF9 /tmp/spdk.key-sha256.leL 
/tmp/spdk.key-sha384.lOS /tmp/spdk.key-sha512.z04 /tmp/spdk.key-sha512.vng /tmp/spdk.key-sha384.xok /tmp/spdk.key-sha256.ZdG '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:43.015 00:17:43.015 real 2m31.510s 00:17:43.015 user 5m49.079s 00:17:43.015 sys 0m24.191s 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.015 ************************************ 00:17:43.015 END TEST nvmf_auth_target 00:17:43.015 ************************************ 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.015 ************************************ 00:17:43.015 START TEST nvmf_bdevio_no_huge 00:17:43.015 ************************************ 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:43.015 * Looking for test storage... 
00:17:43.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.015 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:43.016 17:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.016 17:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.016 --rc genhtml_branch_coverage=1 00:17:43.016 --rc genhtml_function_coverage=1 00:17:43.016 --rc genhtml_legend=1 00:17:43.016 --rc geninfo_all_blocks=1 00:17:43.016 --rc geninfo_unexecuted_blocks=1 00:17:43.016 00:17:43.016 ' 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.016 --rc genhtml_branch_coverage=1 00:17:43.016 --rc genhtml_function_coverage=1 00:17:43.016 --rc genhtml_legend=1 00:17:43.016 --rc geninfo_all_blocks=1 00:17:43.016 --rc geninfo_unexecuted_blocks=1 00:17:43.016 00:17:43.016 ' 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.016 --rc genhtml_branch_coverage=1 00:17:43.016 --rc genhtml_function_coverage=1 00:17:43.016 --rc genhtml_legend=1 00:17:43.016 --rc geninfo_all_blocks=1 00:17:43.016 --rc geninfo_unexecuted_blocks=1 00:17:43.016 00:17:43.016 ' 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.016 --rc genhtml_branch_coverage=1 00:17:43.016 --rc genhtml_function_coverage=1 00:17:43.016 --rc genhtml_legend=1 00:17:43.016 --rc geninfo_all_blocks=1 00:17:43.016 --rc geninfo_unexecuted_blocks=1 00:17:43.016 00:17:43.016 ' 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:43.016 
17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.016 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.017 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:49.595 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:49.595 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:49.595 Found net devices under 0000:86:00.0: cvl_0_0 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.595 
17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:49.595 Found net devices under 0000:86:00.1: cvl_0_1 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:49.595 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:49.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:17:49.596 00:17:49.596 --- 10.0.0.2 ping statistics --- 00:17:49.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.596 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:17:49.596 00:17:49.596 --- 10.0.0.1 ping statistics --- 00:17:49.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.596 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2503797 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2503797 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2503797 ']' 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.596 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.596 [2024-11-20 17:12:06.893846] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:17:49.596 [2024-11-20 17:12:06.893890] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:49.596 [2024-11-20 17:12:06.962857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.596 [2024-11-20 17:12:07.009595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.596 [2024-11-20 17:12:07.009626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.596 [2024-11-20 17:12:07.009634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.596 [2024-11-20 17:12:07.009642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.596 [2024-11-20 17:12:07.009647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:49.596 [2024-11-20 17:12:07.014223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:49.596 [2024-11-20 17:12:07.014312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:49.596 [2024-11-20 17:12:07.014415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.596 [2024-11-20 17:12:07.014416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 [2024-11-20 17:12:07.755474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.855 17:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 Malloc0 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.855 [2024-11-20 17:12:07.799757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.855 17:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.855 { 00:17:49.855 "params": { 00:17:49.855 "name": "Nvme$subsystem", 00:17:49.855 "trtype": "$TEST_TRANSPORT", 00:17:49.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.855 "adrfam": "ipv4", 00:17:49.855 "trsvcid": "$NVMF_PORT", 00:17:49.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.855 "hdgst": ${hdgst:-false}, 00:17:49.855 "ddgst": ${ddgst:-false} 00:17:49.855 }, 00:17:49.855 "method": "bdev_nvme_attach_controller" 00:17:49.855 } 00:17:49.855 EOF 00:17:49.855 )") 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:49.855 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:49.855 "params": { 00:17:49.855 "name": "Nvme1", 00:17:49.855 "trtype": "tcp", 00:17:49.855 "traddr": "10.0.0.2", 00:17:49.855 "adrfam": "ipv4", 00:17:49.855 "trsvcid": "4420", 00:17:49.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.855 "hdgst": false, 00:17:49.856 "ddgst": false 00:17:49.856 }, 00:17:49.856 "method": "bdev_nvme_attach_controller" 00:17:49.856 }' 00:17:49.856 [2024-11-20 17:12:07.853092] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:17:49.856 [2024-11-20 17:12:07.853144] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2503872 ] 00:17:50.114 [2024-11-20 17:12:07.936101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:50.114 [2024-11-20 17:12:07.984413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.114 [2024-11-20 17:12:07.984518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.114 [2024-11-20 17:12:07.984519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.372 I/O targets: 00:17:50.372 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:50.372 00:17:50.372 00:17:50.372 CUnit - A unit testing framework for C - Version 2.1-3 00:17:50.372 http://cunit.sourceforge.net/ 00:17:50.372 00:17:50.372 00:17:50.372 Suite: bdevio tests on: Nvme1n1 00:17:50.372 Test: blockdev write read block ...passed 00:17:50.372 Test: blockdev write zeroes read block ...passed 00:17:50.372 Test: blockdev write zeroes read no split ...passed 00:17:50.372 Test: blockdev write zeroes 
read split ...passed 00:17:50.372 Test: blockdev write zeroes read split partial ...passed 00:17:50.372 Test: blockdev reset ...[2024-11-20 17:12:08.314813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:50.372 [2024-11-20 17:12:08.314875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6920 (9): Bad file descriptor 00:17:50.372 [2024-11-20 17:12:08.327716] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:50.372 passed 00:17:50.372 Test: blockdev write read 8 blocks ...passed 00:17:50.372 Test: blockdev write read size > 128k ...passed 00:17:50.372 Test: blockdev write read invalid size ...passed 00:17:50.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:50.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:50.372 Test: blockdev write read max offset ...passed 00:17:50.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:50.649 Test: blockdev writev readv 8 blocks ...passed 00:17:50.649 Test: blockdev writev readv 30 x 1block ...passed 00:17:50.649 Test: blockdev writev readv block ...passed 00:17:50.649 Test: blockdev writev readv size > 128k ...passed 00:17:50.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:50.649 Test: blockdev comparev and writev ...[2024-11-20 17:12:08.540403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.649 [2024-11-20 17:12:08.540437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.649 [2024-11-20 17:12:08.540451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.649 [2024-11-20 
17:12:08.540459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:50.649 [2024-11-20 17:12:08.540694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.649 [2024-11-20 17:12:08.540704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:50.649 [2024-11-20 17:12:08.540715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.649 [2024-11-20 17:12:08.540723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:50.649 [2024-11-20 17:12:08.540955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.649 [2024-11-20 17:12:08.540965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:50.649 [2024-11-20 17:12:08.540976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.649 [2024-11-20 17:12:08.540982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:50.649 [2024-11-20 17:12:08.541219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.649 [2024-11-20 17:12:08.541228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.649 [2024-11-20 17:12:08.541240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.649 [2024-11-20 17:12:08.541246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:50.649 passed 00:17:50.649 Test: blockdev nvme passthru rw ...passed 00:17:50.649 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:12:08.623588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.649 [2024-11-20 17:12:08.623605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:50.649 [2024-11-20 17:12:08.623710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.649 [2024-11-20 17:12:08.623720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:50.650 [2024-11-20 17:12:08.623840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.650 [2024-11-20 17:12:08.623849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:50.650 [2024-11-20 17:12:08.623968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.650 [2024-11-20 17:12:08.623978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:50.650 passed 00:17:50.650 Test: blockdev nvme admin passthru ...passed 00:17:50.650 Test: blockdev copy ...passed 00:17:50.650 00:17:50.650 Run Summary: Type Total Ran Passed Failed Inactive 00:17:50.650 suites 1 1 n/a 0 0 00:17:50.650 tests 23 23 23 0 0 00:17:50.650 asserts 152 152 152 0 n/a 00:17:50.650 00:17:50.650 Elapsed time = 0.984 seconds 
00:17:50.908 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.908 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.908 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:50.908 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.908 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:50.908 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:50.908 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:50.908 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:51.166 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.166 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:51.166 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.166 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.166 rmmod nvme_tcp 00:17:51.166 rmmod nvme_fabrics 00:17:51.166 rmmod nvme_keyring 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2503797 ']' 00:17:51.166 17:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2503797 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2503797 ']' 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2503797 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2503797 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2503797' 00:17:51.166 killing process with pid 2503797 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2503797 00:17:51.166 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2503797 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:51.425 17:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.425 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:53.969 00:17:53.969 real 0m10.788s 00:17:53.969 user 0m13.005s 00:17:53.969 sys 0m5.362s 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.969 ************************************ 00:17:53.969 END TEST nvmf_bdevio_no_huge 00:17:53.969 ************************************ 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.969 
************************************ 00:17:53.969 START TEST nvmf_tls 00:17:53.969 ************************************ 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:53.969 * Looking for test storage... 00:17:53.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:53.969 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.970 --rc genhtml_branch_coverage=1 00:17:53.970 --rc genhtml_function_coverage=1 00:17:53.970 --rc genhtml_legend=1 00:17:53.970 --rc geninfo_all_blocks=1 00:17:53.970 --rc geninfo_unexecuted_blocks=1 00:17:53.970 00:17:53.970 ' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.970 --rc genhtml_branch_coverage=1 00:17:53.970 --rc genhtml_function_coverage=1 00:17:53.970 --rc genhtml_legend=1 00:17:53.970 --rc geninfo_all_blocks=1 00:17:53.970 --rc geninfo_unexecuted_blocks=1 00:17:53.970 00:17:53.970 ' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.970 --rc genhtml_branch_coverage=1 00:17:53.970 --rc genhtml_function_coverage=1 00:17:53.970 --rc genhtml_legend=1 00:17:53.970 --rc geninfo_all_blocks=1 00:17:53.970 --rc geninfo_unexecuted_blocks=1 00:17:53.970 00:17:53.970 ' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.970 --rc genhtml_branch_coverage=1 00:17:53.970 --rc genhtml_function_coverage=1 00:17:53.970 --rc genhtml_legend=1 00:17:53.970 --rc geninfo_all_blocks=1 00:17:53.970 --rc geninfo_unexecuted_blocks=1 00:17:53.970 00:17:53.970 ' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.970 
17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:53.970 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.542 17:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:00.542 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:00.542 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.542 17:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:00.542 Found net devices under 0000:86:00.0: cvl_0_0 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:00.542 Found net devices under 0000:86:00.1: cvl_0_1 00:18:00.542 17:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:00.542 
17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:00.542 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:00.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:18:00.543 00:18:00.543 --- 10.0.0.2 ping statistics --- 00:18:00.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.543 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:18:00.543 00:18:00.543 --- 10.0.0.1 ping statistics --- 00:18:00.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.543 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2507607 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2507607 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2507607 ']' 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.543 [2024-11-20 17:12:17.737270] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:18:00.543 [2024-11-20 17:12:17.737316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.543 [2024-11-20 17:12:17.817824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.543 [2024-11-20 17:12:17.860830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.543 [2024-11-20 17:12:17.860864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:00.543 [2024-11-20 17:12:17.860871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.543 [2024-11-20 17:12:17.860877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.543 [2024-11-20 17:12:17.860883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.543 [2024-11-20 17:12:17.861445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:00.543 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:00.543 true 00:18:00.543 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.543 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:00.543 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:00.543 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:00.543 
17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:00.543 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.543 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:00.802 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:00.802 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:00.802 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:01.061 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.061 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:01.061 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:01.061 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:01.061 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.061 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:01.323 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:01.323 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:01.323 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:01.584 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:01.584 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.584 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:01.584 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:01.584 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:01.843 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.843 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:02.103 17:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.NjICG0R8JH 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.y1KGq1kASY 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NjICG0R8JH 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.y1KGq1kASY 00:18:02.103 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:02.362 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:02.621 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.NjICG0R8JH 00:18:02.621 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NjICG0R8JH 00:18:02.621 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:02.880 [2024-11-20 17:12:20.745564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.880 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:03.140 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:03.140 [2024-11-20 17:12:21.146577] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.140 [2024-11-20 17:12:21.146796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.140 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:03.398 malloc0 00:18:03.399 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:03.658 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NjICG0R8JH 00:18:03.917 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.918 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NjICG0R8JH 00:18:14.054 Initializing NVMe Controllers 00:18:14.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:14.054 Initialization complete. Launching workers. 
00:18:14.054 ======================================================== 00:18:14.054 Latency(us) 00:18:14.054 Device Information : IOPS MiB/s Average min max 00:18:14.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16722.76 65.32 3827.17 814.31 4980.32 00:18:14.054 ======================================================== 00:18:14.054 Total : 16722.76 65.32 3827.17 814.31 4980.32 00:18:14.054 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NjICG0R8JH 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NjICG0R8JH 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2510140 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2510140 /var/tmp/bdevperf.sock 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2510140 ']' 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.054 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.313 [2024-11-20 17:12:32.117702] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:18:14.313 [2024-11-20 17:12:32.117755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2510140 ] 00:18:14.313 [2024-11-20 17:12:32.190088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.313 [2024-11-20 17:12:32.230029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.313 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.313 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.313 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NjICG0R8JH 00:18:14.572 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:14.830 [2024-11-20 17:12:32.672928] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.830 TLSTESTn1 00:18:14.830 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:14.830 Running I/O for 10 seconds... 00:18:17.143 5396.00 IOPS, 21.08 MiB/s [2024-11-20T16:12:36.122Z] 5517.00 IOPS, 21.55 MiB/s [2024-11-20T16:12:37.057Z] 5539.67 IOPS, 21.64 MiB/s [2024-11-20T16:12:37.993Z] 5560.50 IOPS, 21.72 MiB/s [2024-11-20T16:12:38.928Z] 5586.60 IOPS, 21.82 MiB/s [2024-11-20T16:12:40.303Z] 5539.00 IOPS, 21.64 MiB/s [2024-11-20T16:12:41.238Z] 5539.57 IOPS, 21.64 MiB/s [2024-11-20T16:12:42.175Z] 5517.88 IOPS, 21.55 MiB/s [2024-11-20T16:12:43.175Z] 5532.89 IOPS, 21.61 MiB/s [2024-11-20T16:12:43.175Z] 5539.60 IOPS, 21.64 MiB/s 00:18:25.132 Latency(us) 00:18:25.132 [2024-11-20T16:12:43.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.132 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:25.132 Verification LBA range: start 0x0 length 0x2000 00:18:25.132 TLSTESTn1 : 10.01 5545.33 21.66 0.00 0.00 23049.56 5180.46 24341.94 00:18:25.132 [2024-11-20T16:12:43.175Z] =================================================================================================================== 00:18:25.132 [2024-11-20T16:12:43.175Z] Total : 5545.33 21.66 0.00 0.00 23049.56 5180.46 24341.94 00:18:25.132 { 00:18:25.132 "results": [ 00:18:25.132 { 00:18:25.132 "job": "TLSTESTn1", 00:18:25.132 "core_mask": "0x4", 00:18:25.132 "workload": "verify", 00:18:25.132 "status": "finished", 00:18:25.132 "verify_range": { 00:18:25.132 "start": 0, 00:18:25.132 "length": 8192 00:18:25.132 }, 00:18:25.132 "queue_depth": 128, 00:18:25.132 "io_size": 4096, 00:18:25.132 "runtime": 10.012387, 00:18:25.132 "iops": 
5545.330998492168, 00:18:25.132 "mibps": 21.66144921286003, 00:18:25.132 "io_failed": 0, 00:18:25.132 "io_timeout": 0, 00:18:25.132 "avg_latency_us": 23049.562855736294, 00:18:25.132 "min_latency_us": 5180.464761904762, 00:18:25.132 "max_latency_us": 24341.942857142858 00:18:25.132 } 00:18:25.132 ], 00:18:25.132 "core_count": 1 00:18:25.132 } 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2510140 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2510140 ']' 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2510140 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2510140 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2510140' 00:18:25.132 killing process with pid 2510140 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2510140 00:18:25.132 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.132 00:18:25.132 Latency(us) 00:18:25.132 [2024-11-20T16:12:43.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.132 [2024-11-20T16:12:43.175Z] 
=================================================================================================================== 00:18:25.132 [2024-11-20T16:12:43.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.132 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2510140 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y1KGq1kASY 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y1KGq1kASY 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y1KGq1kASY 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.y1KGq1kASY 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.132 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2511791 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2511791 /var/tmp/bdevperf.sock 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2511791 ']' 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.133 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.391 [2024-11-20 17:12:43.174580] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:18:25.391 [2024-11-20 17:12:43.174632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511791 ] 00:18:25.391 [2024-11-20 17:12:43.251137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.391 [2024-11-20 17:12:43.290052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.391 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.391 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:25.391 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y1KGq1kASY 00:18:25.648 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.907 [2024-11-20 17:12:43.761131] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.907 [2024-11-20 17:12:43.765898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:25.907 [2024-11-20 17:12:43.766413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2567170 (107): Transport endpoint is not connected 00:18:25.907 [2024-11-20 17:12:43.767406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2567170 (9): Bad file descriptor 00:18:25.907 
[2024-11-20 17:12:43.768407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:25.907 [2024-11-20 17:12:43.768417] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:25.907 [2024-11-20 17:12:43.768424] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:25.907 [2024-11-20 17:12:43.768434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:25.907 request: 00:18:25.907 { 00:18:25.907 "name": "TLSTEST", 00:18:25.907 "trtype": "tcp", 00:18:25.907 "traddr": "10.0.0.2", 00:18:25.907 "adrfam": "ipv4", 00:18:25.907 "trsvcid": "4420", 00:18:25.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.907 "prchk_reftag": false, 00:18:25.907 "prchk_guard": false, 00:18:25.907 "hdgst": false, 00:18:25.907 "ddgst": false, 00:18:25.907 "psk": "key0", 00:18:25.907 "allow_unrecognized_csi": false, 00:18:25.907 "method": "bdev_nvme_attach_controller", 00:18:25.907 "req_id": 1 00:18:25.907 } 00:18:25.907 Got JSON-RPC error response 00:18:25.907 response: 00:18:25.907 { 00:18:25.907 "code": -5, 00:18:25.907 "message": "Input/output error" 00:18:25.907 } 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2511791 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2511791 ']' 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2511791 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511791 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511791' 00:18:25.907 killing process with pid 2511791 00:18:25.907 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2511791 00:18:25.907 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.907 00:18:25.907 Latency(us) 00:18:25.907 [2024-11-20T16:12:43.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.907 [2024-11-20T16:12:43.950Z] =================================================================================================================== 00:18:25.907 [2024-11-20T16:12:43.951Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.908 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2511791 00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NjICG0R8JH 00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NjICG0R8JH 00:18:26.167 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NjICG0R8JH 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NjICG0R8JH 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2512023 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2512023 
/var/tmp/bdevperf.sock 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2512023 ']' 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.167 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.167 [2024-11-20 17:12:44.049066] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:18:26.167 [2024-11-20 17:12:44.049115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512023 ] 00:18:26.167 [2024-11-20 17:12:44.116212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.167 [2024-11-20 17:12:44.152546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.425 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.425 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.425 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NjICG0R8JH 00:18:26.425 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:26.683 [2024-11-20 17:12:44.587718] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.684 [2024-11-20 17:12:44.597887] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:26.684 [2024-11-20 17:12:44.597908] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:26.684 [2024-11-20 17:12:44.597930] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:26.684 [2024-11-20 17:12:44.598119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2562170 (107): Transport endpoint is not connected 00:18:26.684 [2024-11-20 17:12:44.599113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2562170 (9): Bad file descriptor 00:18:26.684 [2024-11-20 17:12:44.600115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:26.684 [2024-11-20 17:12:44.600125] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:26.684 [2024-11-20 17:12:44.600132] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:26.684 [2024-11-20 17:12:44.600142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:26.684 request: 00:18:26.684 { 00:18:26.684 "name": "TLSTEST", 00:18:26.684 "trtype": "tcp", 00:18:26.684 "traddr": "10.0.0.2", 00:18:26.684 "adrfam": "ipv4", 00:18:26.684 "trsvcid": "4420", 00:18:26.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.684 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:26.684 "prchk_reftag": false, 00:18:26.684 "prchk_guard": false, 00:18:26.684 "hdgst": false, 00:18:26.684 "ddgst": false, 00:18:26.684 "psk": "key0", 00:18:26.684 "allow_unrecognized_csi": false, 00:18:26.684 "method": "bdev_nvme_attach_controller", 00:18:26.684 "req_id": 1 00:18:26.684 } 00:18:26.684 Got JSON-RPC error response 00:18:26.684 response: 00:18:26.684 { 00:18:26.684 "code": -5, 00:18:26.684 "message": "Input/output error" 00:18:26.684 } 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2512023 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2512023 ']' 00:18:26.684 17:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2512023 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2512023 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2512023' 00:18:26.684 killing process with pid 2512023 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2512023 00:18:26.684 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.684 00:18:26.684 Latency(us) 00:18:26.684 [2024-11-20T16:12:44.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.684 [2024-11-20T16:12:44.727Z] =================================================================================================================== 00:18:26.684 [2024-11-20T16:12:44.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.684 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2512023 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.943 17:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NjICG0R8JH 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NjICG0R8JH 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NjICG0R8JH 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NjICG0R8JH 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2512190 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2512190 /var/tmp/bdevperf.sock 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2512190 ']' 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.943 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.943 [2024-11-20 17:12:44.887051] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:18:26.943 [2024-11-20 17:12:44.887097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512190 ] 00:18:26.943 [2024-11-20 17:12:44.963692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.201 [2024-11-20 17:12:45.003718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.201 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.201 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.201 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NjICG0R8JH 00:18:27.459 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.459 [2024-11-20 17:12:45.442643] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.459 [2024-11-20 17:12:45.450088] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:27.459 [2024-11-20 17:12:45.450109] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:27.459 [2024-11-20 17:12:45.450132] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:27.459 [2024-11-20 17:12:45.450970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdd170 (107): Transport endpoint is not connected 00:18:27.459 [2024-11-20 17:12:45.451964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdd170 (9): Bad file descriptor 00:18:27.459 [2024-11-20 17:12:45.452966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:27.459 [2024-11-20 17:12:45.452975] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:27.459 [2024-11-20 17:12:45.452982] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:27.459 [2024-11-20 17:12:45.452991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:27.459 request: 00:18:27.459 { 00:18:27.459 "name": "TLSTEST", 00:18:27.459 "trtype": "tcp", 00:18:27.459 "traddr": "10.0.0.2", 00:18:27.459 "adrfam": "ipv4", 00:18:27.459 "trsvcid": "4420", 00:18:27.459 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:27.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.459 "prchk_reftag": false, 00:18:27.459 "prchk_guard": false, 00:18:27.459 "hdgst": false, 00:18:27.459 "ddgst": false, 00:18:27.459 "psk": "key0", 00:18:27.459 "allow_unrecognized_csi": false, 00:18:27.459 "method": "bdev_nvme_attach_controller", 00:18:27.459 "req_id": 1 00:18:27.459 } 00:18:27.459 Got JSON-RPC error response 00:18:27.459 response: 00:18:27.459 { 00:18:27.459 "code": -5, 00:18:27.459 "message": "Input/output error" 00:18:27.459 } 00:18:27.459 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2512190 00:18:27.459 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2512190 ']' 00:18:27.459 17:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2512190 00:18:27.459 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.459 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.459 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2512190 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2512190' 00:18:27.718 killing process with pid 2512190 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2512190 00:18:27.718 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.718 00:18:27.718 Latency(us) 00:18:27.718 [2024-11-20T16:12:45.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.718 [2024-11-20T16:12:45.761Z] =================================================================================================================== 00:18:27.718 [2024-11-20T16:12:45.761Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2512190 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.718 17:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2512270 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.718 17:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2512270 /var/tmp/bdevperf.sock 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2512270 ']' 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.718 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.718 [2024-11-20 17:12:45.735959] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:18:27.718 [2024-11-20 17:12:45.736009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512270 ] 00:18:27.976 [2024-11-20 17:12:45.807821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.977 [2024-11-20 17:12:45.844645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.977 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.977 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.977 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:28.235 [2024-11-20 17:12:46.107949] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:28.235 [2024-11-20 17:12:46.107983] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:28.235 request: 00:18:28.235 { 00:18:28.235 "name": "key0", 00:18:28.235 "path": "", 00:18:28.235 "method": "keyring_file_add_key", 00:18:28.235 "req_id": 1 00:18:28.235 } 00:18:28.235 Got JSON-RPC error response 00:18:28.235 response: 00:18:28.235 { 00:18:28.235 "code": -1, 00:18:28.235 "message": "Operation not permitted" 00:18:28.235 } 00:18:28.235 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:28.494 [2024-11-20 17:12:46.316582] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:28.494 [2024-11-20 17:12:46.316609] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:28.494 request: 00:18:28.494 { 00:18:28.494 "name": "TLSTEST", 00:18:28.494 "trtype": "tcp", 00:18:28.494 "traddr": "10.0.0.2", 00:18:28.494 "adrfam": "ipv4", 00:18:28.494 "trsvcid": "4420", 00:18:28.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.494 "prchk_reftag": false, 00:18:28.494 "prchk_guard": false, 00:18:28.494 "hdgst": false, 00:18:28.494 "ddgst": false, 00:18:28.494 "psk": "key0", 00:18:28.494 "allow_unrecognized_csi": false, 00:18:28.494 "method": "bdev_nvme_attach_controller", 00:18:28.494 "req_id": 1 00:18:28.494 } 00:18:28.494 Got JSON-RPC error response 00:18:28.494 response: 00:18:28.494 { 00:18:28.494 "code": -126, 00:18:28.494 "message": "Required key not available" 00:18:28.494 } 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2512270 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2512270 ']' 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2512270 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2512270 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2512270' 00:18:28.494 killing process with pid 2512270 
00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2512270 00:18:28.494 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.494 00:18:28.494 Latency(us) 00:18:28.494 [2024-11-20T16:12:46.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.494 [2024-11-20T16:12:46.537Z] =================================================================================================================== 00:18:28.494 [2024-11-20T16:12:46.537Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.494 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2512270 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2507607 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2507607 ']' 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2507607 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507607 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507607' 00:18:28.754 killing process with pid 2507607 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2507607 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2507607 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:28.754 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.f36Fv1oopb 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:29.013 17:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.f36Fv1oopb 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2512514 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2512514 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2512514 ']' 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.013 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.013 [2024-11-20 17:12:46.871262] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:18:29.013 [2024-11-20 17:12:46.871313] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.013 [2024-11-20 17:12:46.948046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.013 [2024-11-20 17:12:46.987710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.013 [2024-11-20 17:12:46.987747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.013 [2024-11-20 17:12:46.987754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.013 [2024-11-20 17:12:46.987760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.013 [2024-11-20 17:12:46.987765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:29.013 [2024-11-20 17:12:46.988370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.f36Fv1oopb 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f36Fv1oopb 00:18:29.272 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:29.272 [2024-11-20 17:12:47.292037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.530 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:29.530 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:29.789 [2024-11-20 17:12:47.705100] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:29.789 [2024-11-20 17:12:47.705336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:29.789 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.048 malloc0 00:18:30.048 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:30.306 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:18:30.306 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f36Fv1oopb 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.f36Fv1oopb 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2512776 00:18:30.564 17:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2512776 /var/tmp/bdevperf.sock 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2512776 ']' 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.564 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.564 [2024-11-20 17:12:48.557340] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:18:30.564 [2024-11-20 17:12:48.557389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512776 ] 00:18:30.823 [2024-11-20 17:12:48.633512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.823 [2024-11-20 17:12:48.674343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.823 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.823 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:30.823 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:18:31.081 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.339 [2024-11-20 17:12:49.161535] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.339 TLSTESTn1 00:18:31.339 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:31.339 Running I/O for 10 seconds... 
00:18:33.650 4844.00 IOPS, 18.92 MiB/s [2024-11-20T16:12:52.643Z] 5170.50 IOPS, 20.20 MiB/s [2024-11-20T16:12:53.580Z] 5318.00 IOPS, 20.77 MiB/s [2024-11-20T16:12:54.516Z] 5353.25 IOPS, 20.91 MiB/s [2024-11-20T16:12:55.473Z] 5369.00 IOPS, 20.97 MiB/s [2024-11-20T16:12:56.408Z] 5407.50 IOPS, 21.12 MiB/s [2024-11-20T16:12:57.784Z] 5380.29 IOPS, 21.02 MiB/s [2024-11-20T16:12:58.720Z] 5306.12 IOPS, 20.73 MiB/s [2024-11-20T16:12:59.656Z] 5271.00 IOPS, 20.59 MiB/s [2024-11-20T16:12:59.656Z] 5248.60 IOPS, 20.50 MiB/s 00:18:41.613 Latency(us) 00:18:41.613 [2024-11-20T16:12:59.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.613 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:41.613 Verification LBA range: start 0x0 length 0x2000 00:18:41.613 TLSTESTn1 : 10.02 5252.83 20.52 0.00 0.00 24332.47 6147.90 31831.77 00:18:41.613 [2024-11-20T16:12:59.656Z] =================================================================================================================== 00:18:41.613 [2024-11-20T16:12:59.656Z] Total : 5252.83 20.52 0.00 0.00 24332.47 6147.90 31831.77 00:18:41.613 { 00:18:41.613 "results": [ 00:18:41.613 { 00:18:41.613 "job": "TLSTESTn1", 00:18:41.613 "core_mask": "0x4", 00:18:41.613 "workload": "verify", 00:18:41.613 "status": "finished", 00:18:41.613 "verify_range": { 00:18:41.613 "start": 0, 00:18:41.613 "length": 8192 00:18:41.613 }, 00:18:41.613 "queue_depth": 128, 00:18:41.613 "io_size": 4096, 00:18:41.613 "runtime": 10.016313, 00:18:41.613 "iops": 5252.8310566972095, 00:18:41.613 "mibps": 20.518871315223475, 00:18:41.613 "io_failed": 0, 00:18:41.613 "io_timeout": 0, 00:18:41.613 "avg_latency_us": 24332.4718483402, 00:18:41.613 "min_latency_us": 6147.900952380953, 00:18:41.613 "max_latency_us": 31831.77142857143 00:18:41.613 } 00:18:41.613 ], 00:18:41.613 "core_count": 1 00:18:41.613 } 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2512776 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2512776 ']' 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2512776 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2512776 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2512776' 00:18:41.613 killing process with pid 2512776 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2512776 00:18:41.613 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.613 00:18:41.613 Latency(us) 00:18:41.613 [2024-11-20T16:12:59.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.613 [2024-11-20T16:12:59.656Z] =================================================================================================================== 00:18:41.613 [2024-11-20T16:12:59.656Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2512776 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.f36Fv1oopb 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f36Fv1oopb 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f36Fv1oopb 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f36Fv1oopb 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.f36Fv1oopb 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2514609 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2514609 /var/tmp/bdevperf.sock 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2514609 ']' 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.613 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.872 [2024-11-20 17:12:59.667158] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:18:41.872 [2024-11-20 17:12:59.667216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514609 ] 00:18:41.872 [2024-11-20 17:12:59.731477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.872 [2024-11-20 17:12:59.768056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.872 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.872 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.872 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:18:42.129 [2024-11-20 17:13:00.034489] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.f36Fv1oopb': 0100666 00:18:42.129 [2024-11-20 17:13:00.034522] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:42.129 request: 00:18:42.129 { 00:18:42.129 "name": "key0", 00:18:42.129 "path": "/tmp/tmp.f36Fv1oopb", 00:18:42.129 "method": "keyring_file_add_key", 00:18:42.129 "req_id": 1 00:18:42.129 } 00:18:42.129 Got JSON-RPC error response 00:18:42.129 response: 00:18:42.129 { 00:18:42.129 "code": -1, 00:18:42.129 "message": "Operation not permitted" 00:18:42.129 } 00:18:42.129 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:42.388 [2024-11-20 17:13:00.231078] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.388 [2024-11-20 17:13:00.231120] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:42.388 request: 00:18:42.388 { 00:18:42.388 "name": "TLSTEST", 00:18:42.388 "trtype": "tcp", 00:18:42.388 "traddr": "10.0.0.2", 00:18:42.388 "adrfam": "ipv4", 00:18:42.388 "trsvcid": "4420", 00:18:42.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.388 "prchk_reftag": false, 00:18:42.388 "prchk_guard": false, 00:18:42.388 "hdgst": false, 00:18:42.388 "ddgst": false, 00:18:42.388 "psk": "key0", 00:18:42.388 "allow_unrecognized_csi": false, 00:18:42.388 "method": "bdev_nvme_attach_controller", 00:18:42.388 "req_id": 1 00:18:42.388 } 00:18:42.388 Got JSON-RPC error response 00:18:42.388 response: 00:18:42.388 { 00:18:42.388 "code": -126, 00:18:42.388 "message": "Required key not available" 00:18:42.388 } 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2514609 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2514609 ']' 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2514609 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2514609 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2514609' 00:18:42.388 killing process with pid 2514609 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2514609 00:18:42.388 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.388 00:18:42.388 Latency(us) 00:18:42.388 [2024-11-20T16:13:00.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.388 [2024-11-20T16:13:00.431Z] =================================================================================================================== 00:18:42.388 [2024-11-20T16:13:00.431Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:42.388 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2514609 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2512514 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2512514 ']' 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2512514 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2512514 00:18:42.647 
17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2512514' 00:18:42.647 killing process with pid 2512514 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2512514 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2512514 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.647 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2514848 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2514848 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2514848 ']' 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:42.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.906 [2024-11-20 17:13:00.722654] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:18:42.906 [2024-11-20 17:13:00.722701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.906 [2024-11-20 17:13:00.799102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.906 [2024-11-20 17:13:00.834610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.906 [2024-11-20 17:13:00.834645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.906 [2024-11-20 17:13:00.834651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.906 [2024-11-20 17:13:00.834657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.906 [2024-11-20 17:13:00.834662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.906 [2024-11-20 17:13:00.835254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.906 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.f36Fv1oopb 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.f36Fv1oopb 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.f36Fv1oopb 00:18:43.164 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f36Fv1oopb 00:18:43.165 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:43.165 [2024-11-20 17:13:01.151082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.165 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:43.423 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:43.682 [2024-11-20 17:13:01.508009] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.682 [2024-11-20 17:13:01.508245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.682 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:43.682 malloc0 00:18:43.682 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:43.939 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:18:44.196 [2024-11-20 17:13:02.073516] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.f36Fv1oopb': 0100666 00:18:44.196 [2024-11-20 17:13:02.073545] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:44.196 request: 00:18:44.196 { 00:18:44.196 "name": "key0", 00:18:44.196 "path": "/tmp/tmp.f36Fv1oopb", 00:18:44.196 "method": "keyring_file_add_key", 00:18:44.196 "req_id": 1 
00:18:44.196 } 00:18:44.196 Got JSON-RPC error response 00:18:44.196 response: 00:18:44.196 { 00:18:44.196 "code": -1, 00:18:44.196 "message": "Operation not permitted" 00:18:44.196 } 00:18:44.197 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.455 [2024-11-20 17:13:02.245995] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:44.455 [2024-11-20 17:13:02.246028] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:44.455 request: 00:18:44.455 { 00:18:44.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.455 "host": "nqn.2016-06.io.spdk:host1", 00:18:44.455 "psk": "key0", 00:18:44.455 "method": "nvmf_subsystem_add_host", 00:18:44.455 "req_id": 1 00:18:44.455 } 00:18:44.455 Got JSON-RPC error response 00:18:44.455 response: 00:18:44.455 { 00:18:44.455 "code": -32603, 00:18:44.455 "message": "Internal error" 00:18:44.455 } 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2514848 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2514848 ']' 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2514848 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.455 17:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2514848 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2514848' 00:18:44.455 killing process with pid 2514848 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2514848 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2514848 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.f36Fv1oopb 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.455 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2515117 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2515117 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2515117 ']' 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.714 [2024-11-20 17:13:02.533053] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:18:44.714 [2024-11-20 17:13:02.533096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.714 [2024-11-20 17:13:02.591961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.714 [2024-11-20 17:13:02.632915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.714 [2024-11-20 17:13:02.632948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.714 [2024-11-20 17:13:02.632954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.714 [2024-11-20 17:13:02.632960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.714 [2024-11-20 17:13:02.632965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
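The key-file failure earlier in this run (`Invalid permissions for key file ... 0100666` → `Operation not permitted`) is what the `chmod 0600 /tmp/tmp.f36Fv1oopb` at target/tls.sh@182 clears before the target is restarted. A minimal standalone sketch of that permission rule, using a hypothetical temp file and modeling the check (as an assumption) as "reject any group/other permission bits":

```shell
# Sketch of SPDK's keyring_file_add_key permission check (assumed rule:
# any group/other bits on the key file cause rejection; 0600 is accepted).
keyfile=$(mktemp)
printf 'placeholder-key-material\n' > "$keyfile"

chmod 0666 "$keyfile"                 # world-readable, like the failing run
mode=$(stat -c '%a' "$keyfile")
if [ $(( 0$mode & 077 )) -ne 0 ]; then
    echo "rejected: mode $mode (group/other access)"
fi

chmod 0600 "$keyfile"                 # owner-only, like the fixed run
mode=$(stat -c '%a' "$keyfile")
if [ $(( 0$mode & 077 )) -eq 0 ]; then
    echo "accepted: mode $mode"
fi
rm -f "$keyfile"
```

With the 0666 file the mask `077` (group/other bits) is non-zero and the key is rejected; after `chmod 0600` the mask is clear, which is why the second `keyring_file_add_key` attempt later in this run succeeds and `nvmf_subsystem_add_host --psk key0` no longer reports `Key 'key0' does not exist`.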
00:18:44.714 [2024-11-20 17:13:02.633515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.714 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.972 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.972 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.f36Fv1oopb 00:18:44.972 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f36Fv1oopb 00:18:44.972 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:44.973 [2024-11-20 17:13:02.930091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.973 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:45.230 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:45.488 [2024-11-20 17:13:03.335138] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.488 [2024-11-20 17:13:03.335365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:45.488 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:45.747 malloc0 00:18:45.747 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:45.747 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:18:46.005 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2515372 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2515372 /var/tmp/bdevperf.sock 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2515372 ']' 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:46.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.264 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.264 [2024-11-20 17:13:04.189005] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:18:46.264 [2024-11-20 17:13:04.189064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515372 ] 00:18:46.264 [2024-11-20 17:13:04.260414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.264 [2024-11-20 17:13:04.300212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.523 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.523 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.523 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:18:46.782 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.782 [2024-11-20 17:13:04.788231] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.040 TLSTESTn1 00:18:47.040 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:47.300 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:47.300 "subsystems": [ 00:18:47.300 { 00:18:47.300 "subsystem": "keyring", 00:18:47.300 "config": [ 00:18:47.300 { 00:18:47.300 "method": "keyring_file_add_key", 00:18:47.300 "params": { 00:18:47.300 "name": "key0", 00:18:47.300 "path": "/tmp/tmp.f36Fv1oopb" 00:18:47.300 } 00:18:47.300 } 00:18:47.300 ] 00:18:47.300 }, 00:18:47.300 { 00:18:47.300 "subsystem": "iobuf", 00:18:47.300 "config": [ 00:18:47.300 { 00:18:47.300 "method": "iobuf_set_options", 00:18:47.300 "params": { 00:18:47.300 "small_pool_count": 8192, 00:18:47.300 "large_pool_count": 1024, 00:18:47.300 "small_bufsize": 8192, 00:18:47.300 "large_bufsize": 135168, 00:18:47.300 "enable_numa": false 00:18:47.300 } 00:18:47.300 } 00:18:47.300 ] 00:18:47.300 }, 00:18:47.300 { 00:18:47.300 "subsystem": "sock", 00:18:47.300 "config": [ 00:18:47.300 { 00:18:47.300 "method": "sock_set_default_impl", 00:18:47.300 "params": { 00:18:47.300 "impl_name": "posix" 00:18:47.300 } 00:18:47.300 }, 00:18:47.300 { 00:18:47.300 "method": "sock_impl_set_options", 00:18:47.300 "params": { 00:18:47.300 "impl_name": "ssl", 00:18:47.300 "recv_buf_size": 4096, 00:18:47.300 "send_buf_size": 4096, 00:18:47.300 "enable_recv_pipe": true, 00:18:47.300 "enable_quickack": false, 00:18:47.300 "enable_placement_id": 0, 00:18:47.300 "enable_zerocopy_send_server": true, 00:18:47.300 "enable_zerocopy_send_client": false, 00:18:47.300 "zerocopy_threshold": 0, 00:18:47.300 "tls_version": 0, 00:18:47.300 "enable_ktls": false 00:18:47.300 } 00:18:47.300 }, 00:18:47.300 { 00:18:47.300 "method": "sock_impl_set_options", 00:18:47.300 "params": { 00:18:47.300 "impl_name": "posix", 00:18:47.300 "recv_buf_size": 2097152, 00:18:47.300 "send_buf_size": 2097152, 00:18:47.300 "enable_recv_pipe": true, 00:18:47.300 "enable_quickack": false, 00:18:47.300 "enable_placement_id": 0, 
00:18:47.300 "enable_zerocopy_send_server": true, 00:18:47.300 "enable_zerocopy_send_client": false, 00:18:47.300 "zerocopy_threshold": 0, 00:18:47.300 "tls_version": 0, 00:18:47.300 "enable_ktls": false 00:18:47.300 } 00:18:47.300 } 00:18:47.300 ] 00:18:47.300 }, 00:18:47.300 { 00:18:47.300 "subsystem": "vmd", 00:18:47.300 "config": [] 00:18:47.300 }, 00:18:47.300 { 00:18:47.300 "subsystem": "accel", 00:18:47.300 "config": [ 00:18:47.300 { 00:18:47.300 "method": "accel_set_options", 00:18:47.300 "params": { 00:18:47.300 "small_cache_size": 128, 00:18:47.300 "large_cache_size": 16, 00:18:47.300 "task_count": 2048, 00:18:47.300 "sequence_count": 2048, 00:18:47.300 "buf_count": 2048 00:18:47.300 } 00:18:47.300 } 00:18:47.300 ] 00:18:47.300 }, 00:18:47.300 { 00:18:47.300 "subsystem": "bdev", 00:18:47.300 "config": [ 00:18:47.300 { 00:18:47.300 "method": "bdev_set_options", 00:18:47.300 "params": { 00:18:47.300 "bdev_io_pool_size": 65535, 00:18:47.300 "bdev_io_cache_size": 256, 00:18:47.300 "bdev_auto_examine": true, 00:18:47.300 "iobuf_small_cache_size": 128, 00:18:47.301 "iobuf_large_cache_size": 16 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "bdev_raid_set_options", 00:18:47.301 "params": { 00:18:47.301 "process_window_size_kb": 1024, 00:18:47.301 "process_max_bandwidth_mb_sec": 0 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "bdev_iscsi_set_options", 00:18:47.301 "params": { 00:18:47.301 "timeout_sec": 30 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "bdev_nvme_set_options", 00:18:47.301 "params": { 00:18:47.301 "action_on_timeout": "none", 00:18:47.301 "timeout_us": 0, 00:18:47.301 "timeout_admin_us": 0, 00:18:47.301 "keep_alive_timeout_ms": 10000, 00:18:47.301 "arbitration_burst": 0, 00:18:47.301 "low_priority_weight": 0, 00:18:47.301 "medium_priority_weight": 0, 00:18:47.301 "high_priority_weight": 0, 00:18:47.301 "nvme_adminq_poll_period_us": 10000, 00:18:47.301 "nvme_ioq_poll_period_us": 0, 
00:18:47.301 "io_queue_requests": 0, 00:18:47.301 "delay_cmd_submit": true, 00:18:47.301 "transport_retry_count": 4, 00:18:47.301 "bdev_retry_count": 3, 00:18:47.301 "transport_ack_timeout": 0, 00:18:47.301 "ctrlr_loss_timeout_sec": 0, 00:18:47.301 "reconnect_delay_sec": 0, 00:18:47.301 "fast_io_fail_timeout_sec": 0, 00:18:47.301 "disable_auto_failback": false, 00:18:47.301 "generate_uuids": false, 00:18:47.301 "transport_tos": 0, 00:18:47.301 "nvme_error_stat": false, 00:18:47.301 "rdma_srq_size": 0, 00:18:47.301 "io_path_stat": false, 00:18:47.301 "allow_accel_sequence": false, 00:18:47.301 "rdma_max_cq_size": 0, 00:18:47.301 "rdma_cm_event_timeout_ms": 0, 00:18:47.301 "dhchap_digests": [ 00:18:47.301 "sha256", 00:18:47.301 "sha384", 00:18:47.301 "sha512" 00:18:47.301 ], 00:18:47.301 "dhchap_dhgroups": [ 00:18:47.301 "null", 00:18:47.301 "ffdhe2048", 00:18:47.301 "ffdhe3072", 00:18:47.301 "ffdhe4096", 00:18:47.301 "ffdhe6144", 00:18:47.301 "ffdhe8192" 00:18:47.301 ] 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "bdev_nvme_set_hotplug", 00:18:47.301 "params": { 00:18:47.301 "period_us": 100000, 00:18:47.301 "enable": false 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "bdev_malloc_create", 00:18:47.301 "params": { 00:18:47.301 "name": "malloc0", 00:18:47.301 "num_blocks": 8192, 00:18:47.301 "block_size": 4096, 00:18:47.301 "physical_block_size": 4096, 00:18:47.301 "uuid": "0e401da5-0488-42fd-b824-503a131394c6", 00:18:47.301 "optimal_io_boundary": 0, 00:18:47.301 "md_size": 0, 00:18:47.301 "dif_type": 0, 00:18:47.301 "dif_is_head_of_md": false, 00:18:47.301 "dif_pi_format": 0 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "bdev_wait_for_examine" 00:18:47.301 } 00:18:47.301 ] 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "subsystem": "nbd", 00:18:47.301 "config": [] 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "subsystem": "scheduler", 00:18:47.301 "config": [ 00:18:47.301 { 00:18:47.301 "method": 
"framework_set_scheduler", 00:18:47.301 "params": { 00:18:47.301 "name": "static" 00:18:47.301 } 00:18:47.301 } 00:18:47.301 ] 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "subsystem": "nvmf", 00:18:47.301 "config": [ 00:18:47.301 { 00:18:47.301 "method": "nvmf_set_config", 00:18:47.301 "params": { 00:18:47.301 "discovery_filter": "match_any", 00:18:47.301 "admin_cmd_passthru": { 00:18:47.301 "identify_ctrlr": false 00:18:47.301 }, 00:18:47.301 "dhchap_digests": [ 00:18:47.301 "sha256", 00:18:47.301 "sha384", 00:18:47.301 "sha512" 00:18:47.301 ], 00:18:47.301 "dhchap_dhgroups": [ 00:18:47.301 "null", 00:18:47.301 "ffdhe2048", 00:18:47.301 "ffdhe3072", 00:18:47.301 "ffdhe4096", 00:18:47.301 "ffdhe6144", 00:18:47.301 "ffdhe8192" 00:18:47.301 ] 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "nvmf_set_max_subsystems", 00:18:47.301 "params": { 00:18:47.301 "max_subsystems": 1024 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "nvmf_set_crdt", 00:18:47.301 "params": { 00:18:47.301 "crdt1": 0, 00:18:47.301 "crdt2": 0, 00:18:47.301 "crdt3": 0 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "nvmf_create_transport", 00:18:47.301 "params": { 00:18:47.301 "trtype": "TCP", 00:18:47.301 "max_queue_depth": 128, 00:18:47.301 "max_io_qpairs_per_ctrlr": 127, 00:18:47.301 "in_capsule_data_size": 4096, 00:18:47.301 "max_io_size": 131072, 00:18:47.301 "io_unit_size": 131072, 00:18:47.301 "max_aq_depth": 128, 00:18:47.301 "num_shared_buffers": 511, 00:18:47.301 "buf_cache_size": 4294967295, 00:18:47.301 "dif_insert_or_strip": false, 00:18:47.301 "zcopy": false, 00:18:47.301 "c2h_success": false, 00:18:47.301 "sock_priority": 0, 00:18:47.301 "abort_timeout_sec": 1, 00:18:47.301 "ack_timeout": 0, 00:18:47.301 "data_wr_pool_size": 0 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "nvmf_create_subsystem", 00:18:47.301 "params": { 00:18:47.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.301 
"allow_any_host": false, 00:18:47.301 "serial_number": "SPDK00000000000001", 00:18:47.301 "model_number": "SPDK bdev Controller", 00:18:47.301 "max_namespaces": 10, 00:18:47.301 "min_cntlid": 1, 00:18:47.301 "max_cntlid": 65519, 00:18:47.301 "ana_reporting": false 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "nvmf_subsystem_add_host", 00:18:47.301 "params": { 00:18:47.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.301 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.301 "psk": "key0" 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "nvmf_subsystem_add_ns", 00:18:47.301 "params": { 00:18:47.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.301 "namespace": { 00:18:47.301 "nsid": 1, 00:18:47.301 "bdev_name": "malloc0", 00:18:47.301 "nguid": "0E401DA5048842FDB824503A131394C6", 00:18:47.301 "uuid": "0e401da5-0488-42fd-b824-503a131394c6", 00:18:47.301 "no_auto_visible": false 00:18:47.301 } 00:18:47.301 } 00:18:47.301 }, 00:18:47.301 { 00:18:47.301 "method": "nvmf_subsystem_add_listener", 00:18:47.301 "params": { 00:18:47.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.301 "listen_address": { 00:18:47.301 "trtype": "TCP", 00:18:47.301 "adrfam": "IPv4", 00:18:47.301 "traddr": "10.0.0.2", 00:18:47.301 "trsvcid": "4420" 00:18:47.301 }, 00:18:47.301 "secure_channel": true 00:18:47.301 } 00:18:47.301 } 00:18:47.301 ] 00:18:47.301 } 00:18:47.301 ] 00:18:47.301 }' 00:18:47.301 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:47.561 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:47.561 "subsystems": [ 00:18:47.561 { 00:18:47.561 "subsystem": "keyring", 00:18:47.561 "config": [ 00:18:47.561 { 00:18:47.561 "method": "keyring_file_add_key", 00:18:47.561 "params": { 00:18:47.561 "name": "key0", 00:18:47.561 "path": "/tmp/tmp.f36Fv1oopb" 00:18:47.561 } 
00:18:47.561 } 00:18:47.561 ] 00:18:47.561 }, 00:18:47.561 { 00:18:47.561 "subsystem": "iobuf", 00:18:47.561 "config": [ 00:18:47.561 { 00:18:47.561 "method": "iobuf_set_options", 00:18:47.561 "params": { 00:18:47.561 "small_pool_count": 8192, 00:18:47.561 "large_pool_count": 1024, 00:18:47.561 "small_bufsize": 8192, 00:18:47.561 "large_bufsize": 135168, 00:18:47.561 "enable_numa": false 00:18:47.561 } 00:18:47.561 } 00:18:47.561 ] 00:18:47.561 }, 00:18:47.561 { 00:18:47.561 "subsystem": "sock", 00:18:47.561 "config": [ 00:18:47.561 { 00:18:47.561 "method": "sock_set_default_impl", 00:18:47.561 "params": { 00:18:47.561 "impl_name": "posix" 00:18:47.561 } 00:18:47.561 }, 00:18:47.561 { 00:18:47.561 "method": "sock_impl_set_options", 00:18:47.561 "params": { 00:18:47.561 "impl_name": "ssl", 00:18:47.561 "recv_buf_size": 4096, 00:18:47.561 "send_buf_size": 4096, 00:18:47.561 "enable_recv_pipe": true, 00:18:47.561 "enable_quickack": false, 00:18:47.561 "enable_placement_id": 0, 00:18:47.561 "enable_zerocopy_send_server": true, 00:18:47.561 "enable_zerocopy_send_client": false, 00:18:47.561 "zerocopy_threshold": 0, 00:18:47.561 "tls_version": 0, 00:18:47.561 "enable_ktls": false 00:18:47.561 } 00:18:47.561 }, 00:18:47.561 { 00:18:47.561 "method": "sock_impl_set_options", 00:18:47.561 "params": { 00:18:47.561 "impl_name": "posix", 00:18:47.561 "recv_buf_size": 2097152, 00:18:47.561 "send_buf_size": 2097152, 00:18:47.561 "enable_recv_pipe": true, 00:18:47.561 "enable_quickack": false, 00:18:47.561 "enable_placement_id": 0, 00:18:47.561 "enable_zerocopy_send_server": true, 00:18:47.561 "enable_zerocopy_send_client": false, 00:18:47.561 "zerocopy_threshold": 0, 00:18:47.561 "tls_version": 0, 00:18:47.561 "enable_ktls": false 00:18:47.561 } 00:18:47.561 } 00:18:47.561 ] 00:18:47.561 }, 00:18:47.561 { 00:18:47.561 "subsystem": "vmd", 00:18:47.561 "config": [] 00:18:47.561 }, 00:18:47.561 { 00:18:47.561 "subsystem": "accel", 00:18:47.561 "config": [ 00:18:47.561 { 00:18:47.561 
"method": "accel_set_options", 00:18:47.561 "params": { 00:18:47.561 "small_cache_size": 128, 00:18:47.561 "large_cache_size": 16, 00:18:47.561 "task_count": 2048, 00:18:47.561 "sequence_count": 2048, 00:18:47.561 "buf_count": 2048 00:18:47.561 } 00:18:47.561 } 00:18:47.561 ] 00:18:47.561 }, 00:18:47.561 { 00:18:47.561 "subsystem": "bdev", 00:18:47.561 "config": [ 00:18:47.561 { 00:18:47.561 "method": "bdev_set_options", 00:18:47.561 "params": { 00:18:47.561 "bdev_io_pool_size": 65535, 00:18:47.561 "bdev_io_cache_size": 256, 00:18:47.561 "bdev_auto_examine": true, 00:18:47.561 "iobuf_small_cache_size": 128, 00:18:47.562 "iobuf_large_cache_size": 16 00:18:47.562 } 00:18:47.562 }, 00:18:47.562 { 00:18:47.562 "method": "bdev_raid_set_options", 00:18:47.562 "params": { 00:18:47.562 "process_window_size_kb": 1024, 00:18:47.562 "process_max_bandwidth_mb_sec": 0 00:18:47.562 } 00:18:47.562 }, 00:18:47.562 { 00:18:47.562 "method": "bdev_iscsi_set_options", 00:18:47.562 "params": { 00:18:47.562 "timeout_sec": 30 00:18:47.562 } 00:18:47.562 }, 00:18:47.562 { 00:18:47.562 "method": "bdev_nvme_set_options", 00:18:47.562 "params": { 00:18:47.562 "action_on_timeout": "none", 00:18:47.562 "timeout_us": 0, 00:18:47.562 "timeout_admin_us": 0, 00:18:47.562 "keep_alive_timeout_ms": 10000, 00:18:47.562 "arbitration_burst": 0, 00:18:47.562 "low_priority_weight": 0, 00:18:47.562 "medium_priority_weight": 0, 00:18:47.562 "high_priority_weight": 0, 00:18:47.562 "nvme_adminq_poll_period_us": 10000, 00:18:47.562 "nvme_ioq_poll_period_us": 0, 00:18:47.562 "io_queue_requests": 512, 00:18:47.562 "delay_cmd_submit": true, 00:18:47.562 "transport_retry_count": 4, 00:18:47.562 "bdev_retry_count": 3, 00:18:47.562 "transport_ack_timeout": 0, 00:18:47.562 "ctrlr_loss_timeout_sec": 0, 00:18:47.562 "reconnect_delay_sec": 0, 00:18:47.562 "fast_io_fail_timeout_sec": 0, 00:18:47.562 "disable_auto_failback": false, 00:18:47.562 "generate_uuids": false, 00:18:47.562 "transport_tos": 0, 00:18:47.562 
"nvme_error_stat": false, 00:18:47.562 "rdma_srq_size": 0, 00:18:47.562 "io_path_stat": false, 00:18:47.562 "allow_accel_sequence": false, 00:18:47.562 "rdma_max_cq_size": 0, 00:18:47.562 "rdma_cm_event_timeout_ms": 0, 00:18:47.562 "dhchap_digests": [ 00:18:47.562 "sha256", 00:18:47.562 "sha384", 00:18:47.562 "sha512" 00:18:47.562 ], 00:18:47.562 "dhchap_dhgroups": [ 00:18:47.562 "null", 00:18:47.562 "ffdhe2048", 00:18:47.562 "ffdhe3072", 00:18:47.562 "ffdhe4096", 00:18:47.562 "ffdhe6144", 00:18:47.562 "ffdhe8192" 00:18:47.562 ] 00:18:47.562 } 00:18:47.562 }, 00:18:47.562 { 00:18:47.562 "method": "bdev_nvme_attach_controller", 00:18:47.562 "params": { 00:18:47.562 "name": "TLSTEST", 00:18:47.562 "trtype": "TCP", 00:18:47.562 "adrfam": "IPv4", 00:18:47.562 "traddr": "10.0.0.2", 00:18:47.562 "trsvcid": "4420", 00:18:47.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.562 "prchk_reftag": false, 00:18:47.562 "prchk_guard": false, 00:18:47.562 "ctrlr_loss_timeout_sec": 0, 00:18:47.562 "reconnect_delay_sec": 0, 00:18:47.562 "fast_io_fail_timeout_sec": 0, 00:18:47.562 "psk": "key0", 00:18:47.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.562 "hdgst": false, 00:18:47.562 "ddgst": false, 00:18:47.562 "multipath": "multipath" 00:18:47.562 } 00:18:47.562 }, 00:18:47.562 { 00:18:47.562 "method": "bdev_nvme_set_hotplug", 00:18:47.562 "params": { 00:18:47.562 "period_us": 100000, 00:18:47.562 "enable": false 00:18:47.562 } 00:18:47.562 }, 00:18:47.562 { 00:18:47.562 "method": "bdev_wait_for_examine" 00:18:47.562 } 00:18:47.562 ] 00:18:47.562 }, 00:18:47.562 { 00:18:47.562 "subsystem": "nbd", 00:18:47.562 "config": [] 00:18:47.562 } 00:18:47.562 ] 00:18:47.562 }' 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2515372 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2515372 ']' 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2515372 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2515372 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2515372' 00:18:47.562 killing process with pid 2515372 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2515372 00:18:47.562 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.562 00:18:47.562 Latency(us) 00:18:47.562 [2024-11-20T16:13:05.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.562 [2024-11-20T16:13:05.605Z] =================================================================================================================== 00:18:47.562 [2024-11-20T16:13:05.605Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.562 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2515372 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2515117 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2515117 ']' 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2515117 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2515117 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2515117' 00:18:47.822 killing process with pid 2515117 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2515117 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2515117 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.822 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:47.822 "subsystems": [ 00:18:47.822 { 00:18:47.822 "subsystem": "keyring", 00:18:47.822 "config": [ 00:18:47.822 { 00:18:47.822 "method": "keyring_file_add_key", 00:18:47.822 "params": { 00:18:47.822 "name": "key0", 00:18:47.822 "path": "/tmp/tmp.f36Fv1oopb" 00:18:47.822 } 00:18:47.822 } 00:18:47.822 ] 00:18:47.822 }, 00:18:47.822 { 00:18:47.822 "subsystem": "iobuf", 00:18:47.822 "config": [ 00:18:47.822 { 00:18:47.822 "method": "iobuf_set_options", 00:18:47.823 "params": { 00:18:47.823 "small_pool_count": 8192, 00:18:47.823 "large_pool_count": 1024, 00:18:47.823 "small_bufsize": 8192, 00:18:47.823 "large_bufsize": 135168, 00:18:47.823 "enable_numa": false 00:18:47.823 } 00:18:47.823 } 00:18:47.823 ] 00:18:47.823 }, 
00:18:47.823 { 00:18:47.823 "subsystem": "sock", 00:18:47.823 "config": [ 00:18:47.823 { 00:18:47.823 "method": "sock_set_default_impl", 00:18:47.823 "params": { 00:18:47.823 "impl_name": "posix" 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "sock_impl_set_options", 00:18:47.823 "params": { 00:18:47.823 "impl_name": "ssl", 00:18:47.823 "recv_buf_size": 4096, 00:18:47.823 "send_buf_size": 4096, 00:18:47.823 "enable_recv_pipe": true, 00:18:47.823 "enable_quickack": false, 00:18:47.823 "enable_placement_id": 0, 00:18:47.823 "enable_zerocopy_send_server": true, 00:18:47.823 "enable_zerocopy_send_client": false, 00:18:47.823 "zerocopy_threshold": 0, 00:18:47.823 "tls_version": 0, 00:18:47.823 "enable_ktls": false 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "sock_impl_set_options", 00:18:47.823 "params": { 00:18:47.823 "impl_name": "posix", 00:18:47.823 "recv_buf_size": 2097152, 00:18:47.823 "send_buf_size": 2097152, 00:18:47.823 "enable_recv_pipe": true, 00:18:47.823 "enable_quickack": false, 00:18:47.823 "enable_placement_id": 0, 00:18:47.823 "enable_zerocopy_send_server": true, 00:18:47.823 "enable_zerocopy_send_client": false, 00:18:47.823 "zerocopy_threshold": 0, 00:18:47.823 "tls_version": 0, 00:18:47.823 "enable_ktls": false 00:18:47.823 } 00:18:47.823 } 00:18:47.823 ] 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "subsystem": "vmd", 00:18:47.823 "config": [] 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "subsystem": "accel", 00:18:47.823 "config": [ 00:18:47.823 { 00:18:47.823 "method": "accel_set_options", 00:18:47.823 "params": { 00:18:47.823 "small_cache_size": 128, 00:18:47.823 "large_cache_size": 16, 00:18:47.823 "task_count": 2048, 00:18:47.823 "sequence_count": 2048, 00:18:47.823 "buf_count": 2048 00:18:47.823 } 00:18:47.823 } 00:18:47.823 ] 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "subsystem": "bdev", 00:18:47.823 "config": [ 00:18:47.823 { 00:18:47.823 "method": "bdev_set_options", 00:18:47.823 "params": { 
00:18:47.823 "bdev_io_pool_size": 65535, 00:18:47.823 "bdev_io_cache_size": 256, 00:18:47.823 "bdev_auto_examine": true, 00:18:47.823 "iobuf_small_cache_size": 128, 00:18:47.823 "iobuf_large_cache_size": 16 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "bdev_raid_set_options", 00:18:47.823 "params": { 00:18:47.823 "process_window_size_kb": 1024, 00:18:47.823 "process_max_bandwidth_mb_sec": 0 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "bdev_iscsi_set_options", 00:18:47.823 "params": { 00:18:47.823 "timeout_sec": 30 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "bdev_nvme_set_options", 00:18:47.823 "params": { 00:18:47.823 "action_on_timeout": "none", 00:18:47.823 "timeout_us": 0, 00:18:47.823 "timeout_admin_us": 0, 00:18:47.823 "keep_alive_timeout_ms": 10000, 00:18:47.823 "arbitration_burst": 0, 00:18:47.823 "low_priority_weight": 0, 00:18:47.823 "medium_priority_weight": 0, 00:18:47.823 "high_priority_weight": 0, 00:18:47.823 "nvme_adminq_poll_period_us": 10000, 00:18:47.823 "nvme_ioq_poll_period_us": 0, 00:18:47.823 "io_queue_requests": 0, 00:18:47.823 "delay_cmd_submit": true, 00:18:47.823 "transport_retry_count": 4, 00:18:47.823 "bdev_retry_count": 3, 00:18:47.823 "transport_ack_timeout": 0, 00:18:47.823 "ctrlr_loss_timeout_sec": 0, 00:18:47.823 "reconnect_delay_sec": 0, 00:18:47.823 "fast_io_fail_timeout_sec": 0, 00:18:47.823 "disable_auto_failback": false, 00:18:47.823 "generate_uuids": false, 00:18:47.823 "transport_tos": 0, 00:18:47.823 "nvme_error_stat": false, 00:18:47.823 "rdma_srq_size": 0, 00:18:47.823 "io_path_stat": false, 00:18:47.823 "allow_accel_sequence": false, 00:18:47.823 "rdma_max_cq_size": 0, 00:18:47.823 "rdma_cm_event_timeout_ms": 0, 00:18:47.823 "dhchap_digests": [ 00:18:47.823 "sha256", 00:18:47.823 "sha384", 00:18:47.823 "sha512" 00:18:47.823 ], 00:18:47.823 "dhchap_dhgroups": [ 00:18:47.823 "null", 00:18:47.823 "ffdhe2048", 00:18:47.823 "ffdhe3072", 00:18:47.823 
"ffdhe4096", 00:18:47.823 "ffdhe6144", 00:18:47.823 "ffdhe8192" 00:18:47.823 ] 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "bdev_nvme_set_hotplug", 00:18:47.823 "params": { 00:18:47.823 "period_us": 100000, 00:18:47.823 "enable": false 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "bdev_malloc_create", 00:18:47.823 "params": { 00:18:47.823 "name": "malloc0", 00:18:47.823 "num_blocks": 8192, 00:18:47.823 "block_size": 4096, 00:18:47.823 "physical_block_size": 4096, 00:18:47.823 "uuid": "0e401da5-0488-42fd-b824-503a131394c6", 00:18:47.823 "optimal_io_boundary": 0, 00:18:47.823 "md_size": 0, 00:18:47.823 "dif_type": 0, 00:18:47.823 "dif_is_head_of_md": false, 00:18:47.823 "dif_pi_format": 0 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "bdev_wait_for_examine" 00:18:47.823 } 00:18:47.823 ] 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "subsystem": "nbd", 00:18:47.823 "config": [] 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "subsystem": "scheduler", 00:18:47.823 "config": [ 00:18:47.823 { 00:18:47.823 "method": "framework_set_scheduler", 00:18:47.823 "params": { 00:18:47.823 "name": "static" 00:18:47.823 } 00:18:47.823 } 00:18:47.823 ] 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "subsystem": "nvmf", 00:18:47.823 "config": [ 00:18:47.823 { 00:18:47.823 "method": "nvmf_set_config", 00:18:47.823 "params": { 00:18:47.823 "discovery_filter": "match_any", 00:18:47.823 "admin_cmd_passthru": { 00:18:47.823 "identify_ctrlr": false 00:18:47.823 }, 00:18:47.823 "dhchap_digests": [ 00:18:47.823 "sha256", 00:18:47.823 "sha384", 00:18:47.823 "sha512" 00:18:47.823 ], 00:18:47.823 "dhchap_dhgroups": [ 00:18:47.823 "null", 00:18:47.823 "ffdhe2048", 00:18:47.823 "ffdhe3072", 00:18:47.823 "ffdhe4096", 00:18:47.823 "ffdhe6144", 00:18:47.823 "ffdhe8192" 00:18:47.823 ] 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "nvmf_set_max_subsystems", 00:18:47.823 "params": { 00:18:47.823 "max_subsystems": 1024 
00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "nvmf_set_crdt", 00:18:47.823 "params": { 00:18:47.823 "crdt1": 0, 00:18:47.823 "crdt2": 0, 00:18:47.823 "crdt3": 0 00:18:47.823 } 00:18:47.823 }, 00:18:47.823 { 00:18:47.823 "method": "nvmf_create_transport", 00:18:47.823 "params": { 00:18:47.823 "trtype": "TCP", 00:18:47.823 "max_queue_depth": 128, 00:18:47.823 "max_io_qpairs_per_ctrlr": 127, 00:18:47.823 "in_capsule_data_size": 4096, 00:18:47.823 "max_io_size": 131072, 00:18:47.823 "io_unit_size": 131072, 00:18:47.823 "max_aq_depth": 128, 00:18:47.823 "num_shared_buffers": 511, 00:18:47.823 "buf_cache_size": 4294967295, 00:18:47.823 "dif_insert_or_strip": false, 00:18:47.823 "zcopy": false, 00:18:47.823 "c2h_success": false, 00:18:47.823 "sock_priority": 0, 00:18:47.823 "abort_timeout_sec": 1, 00:18:47.824 "ack_timeout": 0, 00:18:47.824 "data_wr_pool_size": 0 00:18:47.824 } 00:18:47.824 }, 00:18:47.824 { 00:18:47.824 "method": "nvmf_create_subsystem", 00:18:47.824 "params": { 00:18:47.824 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.824 "allow_any_host": false, 00:18:47.824 "serial_number": "SPDK00000000000001", 00:18:47.824 "model_number": "SPDK bdev Controller", 00:18:47.824 "max_namespaces": 10, 00:18:47.824 "min_cntlid": 1, 00:18:47.824 "max_cntlid": 65519, 00:18:47.824 "ana_reporting": false 00:18:47.824 } 00:18:47.824 }, 00:18:47.824 { 00:18:47.824 "method": "nvmf_subsystem_add_host", 00:18:47.824 "params": { 00:18:47.824 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.824 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.824 "psk": "key0" 00:18:47.824 } 00:18:47.824 }, 00:18:47.824 { 00:18:47.824 "method": "nvmf_subsystem_add_ns", 00:18:47.824 "params": { 00:18:47.824 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.824 "namespace": { 00:18:47.824 "nsid": 1, 00:18:47.824 "bdev_name": "malloc0", 00:18:47.824 "nguid": "0E401DA5048842FDB824503A131394C6", 00:18:47.824 "uuid": "0e401da5-0488-42fd-b824-503a131394c6", 00:18:47.824 "no_auto_visible": 
false 00:18:47.824 } 00:18:47.824 } 00:18:47.824 }, 00:18:47.824 { 00:18:47.824 "method": "nvmf_subsystem_add_listener", 00:18:47.824 "params": { 00:18:47.824 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.824 "listen_address": { 00:18:47.824 "trtype": "TCP", 00:18:47.824 "adrfam": "IPv4", 00:18:47.824 "traddr": "10.0.0.2", 00:18:47.824 "trsvcid": "4420" 00:18:47.824 }, 00:18:47.824 "secure_channel": true 00:18:47.824 } 00:18:47.824 } 00:18:47.824 ] 00:18:47.824 } 00:18:47.824 ] 00:18:47.824 }' 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2515642 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2515642 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2515642 ']' 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.824 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.083 [2024-11-20 17:13:05.881905] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:18:48.083 [2024-11-20 17:13:05.881949] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.083 [2024-11-20 17:13:05.958583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.083 [2024-11-20 17:13:05.998654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.083 [2024-11-20 17:13:05.998691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.083 [2024-11-20 17:13:05.998698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.083 [2024-11-20 17:13:05.998704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.083 [2024-11-20 17:13:05.998709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.083 [2024-11-20 17:13:05.999304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.343 [2024-11-20 17:13:06.213395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.343 [2024-11-20 17:13:06.245415] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.343 [2024-11-20 17:13:06.245612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2515871 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2515871 /var/tmp/bdevperf.sock 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2515871 ']' 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.912 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:48.912 "subsystems": [ 00:18:48.912 { 00:18:48.912 "subsystem": "keyring", 00:18:48.912 "config": [ 00:18:48.912 { 00:18:48.912 "method": "keyring_file_add_key", 00:18:48.912 "params": { 00:18:48.912 "name": "key0", 00:18:48.912 "path": "/tmp/tmp.f36Fv1oopb" 00:18:48.912 } 00:18:48.912 } 00:18:48.912 ] 00:18:48.912 }, 00:18:48.912 { 00:18:48.912 "subsystem": "iobuf", 00:18:48.912 "config": [ 00:18:48.912 { 00:18:48.912 "method": "iobuf_set_options", 00:18:48.912 "params": { 00:18:48.912 "small_pool_count": 8192, 00:18:48.912 "large_pool_count": 1024, 00:18:48.912 "small_bufsize": 8192, 00:18:48.913 "large_bufsize": 135168, 00:18:48.913 "enable_numa": false 00:18:48.913 } 00:18:48.913 } 00:18:48.913 ] 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "subsystem": "sock", 00:18:48.913 "config": [ 00:18:48.913 { 00:18:48.913 "method": "sock_set_default_impl", 00:18:48.913 "params": { 00:18:48.913 "impl_name": "posix" 00:18:48.913 } 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "method": "sock_impl_set_options", 00:18:48.913 "params": { 00:18:48.913 "impl_name": "ssl", 00:18:48.913 "recv_buf_size": 4096, 00:18:48.913 "send_buf_size": 4096, 00:18:48.913 "enable_recv_pipe": true, 00:18:48.913 "enable_quickack": false, 00:18:48.913 "enable_placement_id": 0, 00:18:48.913 "enable_zerocopy_send_server": true, 00:18:48.913 "enable_zerocopy_send_client": false, 00:18:48.913 "zerocopy_threshold": 0, 00:18:48.913 "tls_version": 0, 00:18:48.913 "enable_ktls": false 00:18:48.913 } 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "method": "sock_impl_set_options", 00:18:48.913 "params": { 
00:18:48.913 "impl_name": "posix", 00:18:48.913 "recv_buf_size": 2097152, 00:18:48.913 "send_buf_size": 2097152, 00:18:48.913 "enable_recv_pipe": true, 00:18:48.913 "enable_quickack": false, 00:18:48.913 "enable_placement_id": 0, 00:18:48.913 "enable_zerocopy_send_server": true, 00:18:48.913 "enable_zerocopy_send_client": false, 00:18:48.913 "zerocopy_threshold": 0, 00:18:48.913 "tls_version": 0, 00:18:48.913 "enable_ktls": false 00:18:48.913 } 00:18:48.913 } 00:18:48.913 ] 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "subsystem": "vmd", 00:18:48.913 "config": [] 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "subsystem": "accel", 00:18:48.913 "config": [ 00:18:48.913 { 00:18:48.913 "method": "accel_set_options", 00:18:48.913 "params": { 00:18:48.913 "small_cache_size": 128, 00:18:48.913 "large_cache_size": 16, 00:18:48.913 "task_count": 2048, 00:18:48.913 "sequence_count": 2048, 00:18:48.913 "buf_count": 2048 00:18:48.913 } 00:18:48.913 } 00:18:48.913 ] 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "subsystem": "bdev", 00:18:48.913 "config": [ 00:18:48.913 { 00:18:48.913 "method": "bdev_set_options", 00:18:48.913 "params": { 00:18:48.913 "bdev_io_pool_size": 65535, 00:18:48.913 "bdev_io_cache_size": 256, 00:18:48.913 "bdev_auto_examine": true, 00:18:48.913 "iobuf_small_cache_size": 128, 00:18:48.913 "iobuf_large_cache_size": 16 00:18:48.913 } 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "method": "bdev_raid_set_options", 00:18:48.913 "params": { 00:18:48.913 "process_window_size_kb": 1024, 00:18:48.913 "process_max_bandwidth_mb_sec": 0 00:18:48.913 } 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "method": "bdev_iscsi_set_options", 00:18:48.913 "params": { 00:18:48.913 "timeout_sec": 30 00:18:48.913 } 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "method": "bdev_nvme_set_options", 00:18:48.913 "params": { 00:18:48.913 "action_on_timeout": "none", 00:18:48.913 "timeout_us": 0, 00:18:48.913 "timeout_admin_us": 0, 00:18:48.913 "keep_alive_timeout_ms": 10000, 00:18:48.913 
"arbitration_burst": 0, 00:18:48.913 "low_priority_weight": 0, 00:18:48.913 "medium_priority_weight": 0, 00:18:48.913 "high_priority_weight": 0, 00:18:48.913 "nvme_adminq_poll_period_us": 10000, 00:18:48.913 "nvme_ioq_poll_period_us": 0, 00:18:48.913 "io_queue_requests": 512, 00:18:48.913 "delay_cmd_submit": true, 00:18:48.913 "transport_retry_count": 4, 00:18:48.913 "bdev_retry_count": 3, 00:18:48.913 "transport_ack_timeout": 0, 00:18:48.913 "ctrlr_loss_timeout_sec": 0, 00:18:48.913 "reconnect_delay_sec": 0, 00:18:48.913 "fast_io_fail_timeout_sec": 0, 00:18:48.913 "disable_auto_failback": false, 00:18:48.913 "generate_uuids": false, 00:18:48.913 "transport_tos": 0, 00:18:48.913 "nvme_error_stat": false, 00:18:48.913 "rdma_srq_size": 0, 00:18:48.913 "io_path_stat": false, 00:18:48.913 "allow_accel_sequence": false, 00:18:48.913 "rdma_max_cq_size": 0, 00:18:48.913 "rdma_cm_event_timeout_ms": 0, 00:18:48.913 "dhchap_digests": [ 00:18:48.913 "sha256", 00:18:48.913 "sha384", 00:18:48.913 "sha512" 00:18:48.913 ], 00:18:48.913 "dhchap_dhgroups": [ 00:18:48.913 "null", 00:18:48.913 "ffdhe2048", 00:18:48.913 "ffdhe3072", 00:18:48.913 "ffdhe4096", 00:18:48.913 "ffdhe6144", 00:18:48.913 "ffdhe8192" 00:18:48.913 ] 00:18:48.913 } 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "method": "bdev_nvme_attach_controller", 00:18:48.913 "params": { 00:18:48.913 "name": "TLSTEST", 00:18:48.913 "trtype": "TCP", 00:18:48.913 "adrfam": "IPv4", 00:18:48.913 "traddr": "10.0.0.2", 00:18:48.913 "trsvcid": "4420", 00:18:48.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.913 "prchk_reftag": false, 00:18:48.913 "prchk_guard": false, 00:18:48.913 "ctrlr_loss_timeout_sec": 0, 00:18:48.913 "reconnect_delay_sec": 0, 00:18:48.913 "fast_io_fail_timeout_sec": 0, 00:18:48.913 "psk": "key0", 00:18:48.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.913 "hdgst": false, 00:18:48.913 "ddgst": false, 00:18:48.913 "multipath": "multipath" 00:18:48.913 } 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 
"method": "bdev_nvme_set_hotplug", 00:18:48.913 "params": { 00:18:48.913 "period_us": 100000, 00:18:48.913 "enable": false 00:18:48.913 } 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "method": "bdev_wait_for_examine" 00:18:48.913 } 00:18:48.913 ] 00:18:48.913 }, 00:18:48.913 { 00:18:48.913 "subsystem": "nbd", 00:18:48.913 "config": [] 00:18:48.913 } 00:18:48.913 ] 00:18:48.913 }' 00:18:48.913 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.913 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.913 [2024-11-20 17:13:06.785366] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:18:48.913 [2024-11-20 17:13:06.785411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515871 ] 00:18:48.913 [2024-11-20 17:13:06.860007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.913 [2024-11-20 17:13:06.901306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.172 [2024-11-20 17:13:07.053034] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.740 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.740 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.740 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:49.740 Running I/O for 10 seconds... 
00:18:52.053 5406.00 IOPS, 21.12 MiB/s [2024-11-20T16:13:11.087Z] 5489.00 IOPS, 21.44 MiB/s [2024-11-20T16:13:12.028Z] 5192.33 IOPS, 20.28 MiB/s [2024-11-20T16:13:12.964Z] 5160.25 IOPS, 20.16 MiB/s [2024-11-20T16:13:13.900Z] 5130.20 IOPS, 20.04 MiB/s [2024-11-20T16:13:14.837Z] 5045.50 IOPS, 19.71 MiB/s [2024-11-20T16:13:15.775Z] 4953.00 IOPS, 19.35 MiB/s [2024-11-20T16:13:17.152Z] 4891.62 IOPS, 19.11 MiB/s [2024-11-20T16:13:18.089Z] 4843.22 IOPS, 18.92 MiB/s [2024-11-20T16:13:18.089Z] 4798.00 IOPS, 18.74 MiB/s 00:19:00.046 Latency(us) 00:19:00.046 [2024-11-20T16:13:18.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.046 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:00.046 Verification LBA range: start 0x0 length 0x2000 00:19:00.046 TLSTESTn1 : 10.02 4800.53 18.75 0.00 0.00 26620.67 6116.69 46187.28 00:19:00.046 [2024-11-20T16:13:18.089Z] =================================================================================================================== 00:19:00.046 [2024-11-20T16:13:18.089Z] Total : 4800.53 18.75 0.00 0.00 26620.67 6116.69 46187.28 00:19:00.046 { 00:19:00.046 "results": [ 00:19:00.046 { 00:19:00.046 "job": "TLSTESTn1", 00:19:00.046 "core_mask": "0x4", 00:19:00.046 "workload": "verify", 00:19:00.046 "status": "finished", 00:19:00.046 "verify_range": { 00:19:00.046 "start": 0, 00:19:00.046 "length": 8192 00:19:00.046 }, 00:19:00.046 "queue_depth": 128, 00:19:00.046 "io_size": 4096, 00:19:00.046 "runtime": 10.021385, 00:19:00.046 "iops": 4800.534057917144, 00:19:00.046 "mibps": 18.752086163738845, 00:19:00.046 "io_failed": 0, 00:19:00.046 "io_timeout": 0, 00:19:00.046 "avg_latency_us": 26620.67184897473, 00:19:00.046 "min_latency_us": 6116.693333333334, 00:19:00.046 "max_latency_us": 46187.276190476194 00:19:00.046 } 00:19:00.046 ], 00:19:00.046 "core_count": 1 00:19:00.046 } 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2515871 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2515871 ']' 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2515871 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2515871 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2515871' 00:19:00.046 killing process with pid 2515871 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2515871 00:19:00.046 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.046 00:19:00.046 Latency(us) 00:19:00.046 [2024-11-20T16:13:18.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.046 [2024-11-20T16:13:18.089Z] =================================================================================================================== 00:19:00.046 [2024-11-20T16:13:18.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2515871 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2515642 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2515642 ']' 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2515642 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.046 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2515642 00:19:00.046 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.046 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.046 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2515642' 00:19:00.046 killing process with pid 2515642 00:19:00.046 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2515642 00:19:00.046 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2515642 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2517721 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2517721 00:19:00.306 
17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2517721 ']' 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.306 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.306 [2024-11-20 17:13:18.253079] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:19:00.306 [2024-11-20 17:13:18.253124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.306 [2024-11-20 17:13:18.331162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.566 [2024-11-20 17:13:18.372135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.566 [2024-11-20 17:13:18.372168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.566 [2024-11-20 17:13:18.372174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.566 [2024-11-20 17:13:18.372180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:00.566 [2024-11-20 17:13:18.372185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.566 [2024-11-20 17:13:18.372760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.f36Fv1oopb 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f36Fv1oopb 00:19:00.566 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:00.825 [2024-11-20 17:13:18.673195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.825 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:01.083 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:01.083 [2024-11-20 17:13:19.086433] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:01.083 [2024-11-20 17:13:19.086650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.083 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:01.341 malloc0 00:19:01.341 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:01.600 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:19:01.858 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2517979 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2517979 /var/tmp/bdevperf.sock 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2517979 ']' 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.117 
17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.117 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.117 [2024-11-20 17:13:19.934523] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:19:02.117 [2024-11-20 17:13:19.934574] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517979 ] 00:19:02.117 [2024-11-20 17:13:20.007681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.117 [2024-11-20 17:13:20.054801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.117 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.117 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.117 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:19:02.375 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:02.634 [2024-11-20 17:13:20.541049] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:02.634 nvme0n1 00:19:02.634 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:02.893 Running I/O for 1 seconds... 00:19:03.830 5512.00 IOPS, 21.53 MiB/s 00:19:03.830 Latency(us) 00:19:03.830 [2024-11-20T16:13:21.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.830 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:03.830 Verification LBA range: start 0x0 length 0x2000 00:19:03.830 nvme0n1 : 1.01 5571.62 21.76 0.00 0.00 22821.66 4899.60 20846.69 00:19:03.830 [2024-11-20T16:13:21.873Z] =================================================================================================================== 00:19:03.830 [2024-11-20T16:13:21.873Z] Total : 5571.62 21.76 0.00 0.00 22821.66 4899.60 20846.69 00:19:03.830 { 00:19:03.830 "results": [ 00:19:03.830 { 00:19:03.830 "job": "nvme0n1", 00:19:03.830 "core_mask": "0x2", 00:19:03.830 "workload": "verify", 00:19:03.830 "status": "finished", 00:19:03.830 "verify_range": { 00:19:03.830 "start": 0, 00:19:03.830 "length": 8192 00:19:03.830 }, 00:19:03.830 "queue_depth": 128, 00:19:03.830 "io_size": 4096, 00:19:03.830 "runtime": 1.012453, 00:19:03.830 "iops": 5571.616657760904, 00:19:03.830 "mibps": 21.76412756937853, 00:19:03.830 "io_failed": 0, 00:19:03.830 "io_timeout": 0, 00:19:03.830 "avg_latency_us": 22821.66267176539, 00:19:03.830 "min_latency_us": 4899.596190476191, 00:19:03.830 "max_latency_us": 20846.689523809524 00:19:03.830 } 00:19:03.830 ], 00:19:03.830 "core_count": 1 00:19:03.830 } 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2517979 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2517979 ']' 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2517979 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2517979 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2517979' 00:19:03.830 killing process with pid 2517979 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2517979 00:19:03.830 Received shutdown signal, test time was about 1.000000 seconds 00:19:03.830 00:19:03.830 Latency(us) 00:19:03.830 [2024-11-20T16:13:21.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.830 [2024-11-20T16:13:21.873Z] =================================================================================================================== 00:19:03.830 [2024-11-20T16:13:21.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.830 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2517979 00:19:04.090 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2517721 00:19:04.090 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2517721 ']' 00:19:04.090 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2517721 00:19:04.090 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:04.090 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.090 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2517721 00:19:04.090 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.090 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.090 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2517721' 00:19:04.090 killing process with pid 2517721 00:19:04.090 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2517721 00:19:04.090 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2517721 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2518443 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2518443 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2518443 ']' 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.349 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.349 [2024-11-20 17:13:22.257309] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:19:04.349 [2024-11-20 17:13:22.257360] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.349 [2024-11-20 17:13:22.334135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.349 [2024-11-20 17:13:22.369852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.349 [2024-11-20 17:13:22.369887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.349 [2024-11-20 17:13:22.369895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.349 [2024-11-20 17:13:22.369901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.349 [2024-11-20 17:13:22.369907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.349 [2024-11-20 17:13:22.370504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.608 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.608 [2024-11-20 17:13:22.514549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.608 malloc0 00:19:04.609 [2024-11-20 17:13:22.542690] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.609 [2024-11-20 17:13:22.542900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2518466 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2518466 /var/tmp/bdevperf.sock 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2518466 ']' 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.609 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.609 [2024-11-20 17:13:22.616018] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:19:04.609 [2024-11-20 17:13:22.616056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518466 ] 00:19:04.867 [2024-11-20 17:13:22.687687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.867 [2024-11-20 17:13:22.727732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.867 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.867 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.867 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f36Fv1oopb 00:19:05.126 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:05.385 [2024-11-20 17:13:23.172286] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.385 nvme0n1 00:19:05.385 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.385 Running I/O for 1 seconds... 
00:19:06.580 5439.00 IOPS, 21.25 MiB/s 00:19:06.580 Latency(us) 00:19:06.580 [2024-11-20T16:13:24.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.580 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:06.580 Verification LBA range: start 0x0 length 0x2000 00:19:06.581 nvme0n1 : 1.01 5496.03 21.47 0.00 0.00 23133.57 4899.60 23093.64 00:19:06.581 [2024-11-20T16:13:24.624Z] =================================================================================================================== 00:19:06.581 [2024-11-20T16:13:24.624Z] Total : 5496.03 21.47 0.00 0.00 23133.57 4899.60 23093.64 00:19:06.581 { 00:19:06.581 "results": [ 00:19:06.581 { 00:19:06.581 "job": "nvme0n1", 00:19:06.581 "core_mask": "0x2", 00:19:06.581 "workload": "verify", 00:19:06.581 "status": "finished", 00:19:06.581 "verify_range": { 00:19:06.581 "start": 0, 00:19:06.581 "length": 8192 00:19:06.581 }, 00:19:06.581 "queue_depth": 128, 00:19:06.581 "io_size": 4096, 00:19:06.581 "runtime": 1.012913, 00:19:06.581 "iops": 5496.029767610841, 00:19:06.581 "mibps": 21.46886627972985, 00:19:06.581 "io_failed": 0, 00:19:06.581 "io_timeout": 0, 00:19:06.581 "avg_latency_us": 23133.56557263466, 00:19:06.581 "min_latency_us": 4899.596190476191, 00:19:06.581 "max_latency_us": 23093.638095238097 00:19:06.581 } 00:19:06.581 ], 00:19:06.581 "core_count": 1 00:19:06.581 } 00:19:06.581 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:06.581 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.581 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.581 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.581 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:06.581 "subsystems": [ 00:19:06.581 { 00:19:06.581 "subsystem": 
"keyring", 00:19:06.581 "config": [ 00:19:06.581 { 00:19:06.581 "method": "keyring_file_add_key", 00:19:06.581 "params": { 00:19:06.581 "name": "key0", 00:19:06.581 "path": "/tmp/tmp.f36Fv1oopb" 00:19:06.581 } 00:19:06.581 } 00:19:06.581 ] 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "subsystem": "iobuf", 00:19:06.581 "config": [ 00:19:06.581 { 00:19:06.581 "method": "iobuf_set_options", 00:19:06.581 "params": { 00:19:06.581 "small_pool_count": 8192, 00:19:06.581 "large_pool_count": 1024, 00:19:06.581 "small_bufsize": 8192, 00:19:06.581 "large_bufsize": 135168, 00:19:06.581 "enable_numa": false 00:19:06.581 } 00:19:06.581 } 00:19:06.581 ] 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "subsystem": "sock", 00:19:06.581 "config": [ 00:19:06.581 { 00:19:06.581 "method": "sock_set_default_impl", 00:19:06.581 "params": { 00:19:06.581 "impl_name": "posix" 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "sock_impl_set_options", 00:19:06.581 "params": { 00:19:06.581 "impl_name": "ssl", 00:19:06.581 "recv_buf_size": 4096, 00:19:06.581 "send_buf_size": 4096, 00:19:06.581 "enable_recv_pipe": true, 00:19:06.581 "enable_quickack": false, 00:19:06.581 "enable_placement_id": 0, 00:19:06.581 "enable_zerocopy_send_server": true, 00:19:06.581 "enable_zerocopy_send_client": false, 00:19:06.581 "zerocopy_threshold": 0, 00:19:06.581 "tls_version": 0, 00:19:06.581 "enable_ktls": false 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "sock_impl_set_options", 00:19:06.581 "params": { 00:19:06.581 "impl_name": "posix", 00:19:06.581 "recv_buf_size": 2097152, 00:19:06.581 "send_buf_size": 2097152, 00:19:06.581 "enable_recv_pipe": true, 00:19:06.581 "enable_quickack": false, 00:19:06.581 "enable_placement_id": 0, 00:19:06.581 "enable_zerocopy_send_server": true, 00:19:06.581 "enable_zerocopy_send_client": false, 00:19:06.581 "zerocopy_threshold": 0, 00:19:06.581 "tls_version": 0, 00:19:06.581 "enable_ktls": false 00:19:06.581 } 00:19:06.581 } 00:19:06.581 
] 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "subsystem": "vmd", 00:19:06.581 "config": [] 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "subsystem": "accel", 00:19:06.581 "config": [ 00:19:06.581 { 00:19:06.581 "method": "accel_set_options", 00:19:06.581 "params": { 00:19:06.581 "small_cache_size": 128, 00:19:06.581 "large_cache_size": 16, 00:19:06.581 "task_count": 2048, 00:19:06.581 "sequence_count": 2048, 00:19:06.581 "buf_count": 2048 00:19:06.581 } 00:19:06.581 } 00:19:06.581 ] 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "subsystem": "bdev", 00:19:06.581 "config": [ 00:19:06.581 { 00:19:06.581 "method": "bdev_set_options", 00:19:06.581 "params": { 00:19:06.581 "bdev_io_pool_size": 65535, 00:19:06.581 "bdev_io_cache_size": 256, 00:19:06.581 "bdev_auto_examine": true, 00:19:06.581 "iobuf_small_cache_size": 128, 00:19:06.581 "iobuf_large_cache_size": 16 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "bdev_raid_set_options", 00:19:06.581 "params": { 00:19:06.581 "process_window_size_kb": 1024, 00:19:06.581 "process_max_bandwidth_mb_sec": 0 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "bdev_iscsi_set_options", 00:19:06.581 "params": { 00:19:06.581 "timeout_sec": 30 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "bdev_nvme_set_options", 00:19:06.581 "params": { 00:19:06.581 "action_on_timeout": "none", 00:19:06.581 "timeout_us": 0, 00:19:06.581 "timeout_admin_us": 0, 00:19:06.581 "keep_alive_timeout_ms": 10000, 00:19:06.581 "arbitration_burst": 0, 00:19:06.581 "low_priority_weight": 0, 00:19:06.581 "medium_priority_weight": 0, 00:19:06.581 "high_priority_weight": 0, 00:19:06.581 "nvme_adminq_poll_period_us": 10000, 00:19:06.581 "nvme_ioq_poll_period_us": 0, 00:19:06.581 "io_queue_requests": 0, 00:19:06.581 "delay_cmd_submit": true, 00:19:06.581 "transport_retry_count": 4, 00:19:06.581 "bdev_retry_count": 3, 00:19:06.581 "transport_ack_timeout": 0, 00:19:06.581 "ctrlr_loss_timeout_sec": 0, 
00:19:06.581 "reconnect_delay_sec": 0, 00:19:06.581 "fast_io_fail_timeout_sec": 0, 00:19:06.581 "disable_auto_failback": false, 00:19:06.581 "generate_uuids": false, 00:19:06.581 "transport_tos": 0, 00:19:06.581 "nvme_error_stat": false, 00:19:06.581 "rdma_srq_size": 0, 00:19:06.581 "io_path_stat": false, 00:19:06.581 "allow_accel_sequence": false, 00:19:06.581 "rdma_max_cq_size": 0, 00:19:06.581 "rdma_cm_event_timeout_ms": 0, 00:19:06.581 "dhchap_digests": [ 00:19:06.581 "sha256", 00:19:06.581 "sha384", 00:19:06.581 "sha512" 00:19:06.581 ], 00:19:06.581 "dhchap_dhgroups": [ 00:19:06.581 "null", 00:19:06.581 "ffdhe2048", 00:19:06.581 "ffdhe3072", 00:19:06.581 "ffdhe4096", 00:19:06.581 "ffdhe6144", 00:19:06.581 "ffdhe8192" 00:19:06.581 ] 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "bdev_nvme_set_hotplug", 00:19:06.581 "params": { 00:19:06.581 "period_us": 100000, 00:19:06.581 "enable": false 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "bdev_malloc_create", 00:19:06.581 "params": { 00:19:06.581 "name": "malloc0", 00:19:06.581 "num_blocks": 8192, 00:19:06.581 "block_size": 4096, 00:19:06.581 "physical_block_size": 4096, 00:19:06.581 "uuid": "a4285843-25ba-49fe-be7b-549d707d1333", 00:19:06.581 "optimal_io_boundary": 0, 00:19:06.581 "md_size": 0, 00:19:06.581 "dif_type": 0, 00:19:06.581 "dif_is_head_of_md": false, 00:19:06.581 "dif_pi_format": 0 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "bdev_wait_for_examine" 00:19:06.581 } 00:19:06.581 ] 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "subsystem": "nbd", 00:19:06.581 "config": [] 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "subsystem": "scheduler", 00:19:06.581 "config": [ 00:19:06.581 { 00:19:06.581 "method": "framework_set_scheduler", 00:19:06.581 "params": { 00:19:06.581 "name": "static" 00:19:06.581 } 00:19:06.581 } 00:19:06.581 ] 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "subsystem": "nvmf", 00:19:06.581 "config": [ 00:19:06.581 { 
00:19:06.581 "method": "nvmf_set_config", 00:19:06.581 "params": { 00:19:06.581 "discovery_filter": "match_any", 00:19:06.581 "admin_cmd_passthru": { 00:19:06.581 "identify_ctrlr": false 00:19:06.581 }, 00:19:06.581 "dhchap_digests": [ 00:19:06.581 "sha256", 00:19:06.581 "sha384", 00:19:06.581 "sha512" 00:19:06.581 ], 00:19:06.581 "dhchap_dhgroups": [ 00:19:06.581 "null", 00:19:06.581 "ffdhe2048", 00:19:06.581 "ffdhe3072", 00:19:06.581 "ffdhe4096", 00:19:06.581 "ffdhe6144", 00:19:06.581 "ffdhe8192" 00:19:06.581 ] 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "nvmf_set_max_subsystems", 00:19:06.581 "params": { 00:19:06.581 "max_subsystems": 1024 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.581 "method": "nvmf_set_crdt", 00:19:06.581 "params": { 00:19:06.581 "crdt1": 0, 00:19:06.581 "crdt2": 0, 00:19:06.581 "crdt3": 0 00:19:06.581 } 00:19:06.581 }, 00:19:06.581 { 00:19:06.582 "method": "nvmf_create_transport", 00:19:06.582 "params": { 00:19:06.582 "trtype": "TCP", 00:19:06.582 "max_queue_depth": 128, 00:19:06.582 "max_io_qpairs_per_ctrlr": 127, 00:19:06.582 "in_capsule_data_size": 4096, 00:19:06.582 "max_io_size": 131072, 00:19:06.582 "io_unit_size": 131072, 00:19:06.582 "max_aq_depth": 128, 00:19:06.582 "num_shared_buffers": 511, 00:19:06.582 "buf_cache_size": 4294967295, 00:19:06.582 "dif_insert_or_strip": false, 00:19:06.582 "zcopy": false, 00:19:06.582 "c2h_success": false, 00:19:06.582 "sock_priority": 0, 00:19:06.582 "abort_timeout_sec": 1, 00:19:06.582 "ack_timeout": 0, 00:19:06.582 "data_wr_pool_size": 0 00:19:06.582 } 00:19:06.582 }, 00:19:06.582 { 00:19:06.582 "method": "nvmf_create_subsystem", 00:19:06.582 "params": { 00:19:06.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.582 "allow_any_host": false, 00:19:06.582 "serial_number": "00000000000000000000", 00:19:06.582 "model_number": "SPDK bdev Controller", 00:19:06.582 "max_namespaces": 32, 00:19:06.582 "min_cntlid": 1, 00:19:06.582 "max_cntlid": 65519, 00:19:06.582 
"ana_reporting": false 00:19:06.582 } 00:19:06.582 }, 00:19:06.582 { 00:19:06.582 "method": "nvmf_subsystem_add_host", 00:19:06.582 "params": { 00:19:06.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.582 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.582 "psk": "key0" 00:19:06.582 } 00:19:06.582 }, 00:19:06.582 { 00:19:06.582 "method": "nvmf_subsystem_add_ns", 00:19:06.582 "params": { 00:19:06.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.582 "namespace": { 00:19:06.582 "nsid": 1, 00:19:06.582 "bdev_name": "malloc0", 00:19:06.582 "nguid": "A428584325BA49FEBE7B549D707D1333", 00:19:06.582 "uuid": "a4285843-25ba-49fe-be7b-549d707d1333", 00:19:06.582 "no_auto_visible": false 00:19:06.582 } 00:19:06.582 } 00:19:06.582 }, 00:19:06.582 { 00:19:06.582 "method": "nvmf_subsystem_add_listener", 00:19:06.582 "params": { 00:19:06.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.582 "listen_address": { 00:19:06.582 "trtype": "TCP", 00:19:06.582 "adrfam": "IPv4", 00:19:06.582 "traddr": "10.0.0.2", 00:19:06.582 "trsvcid": "4420" 00:19:06.582 }, 00:19:06.582 "secure_channel": false, 00:19:06.582 "sock_impl": "ssl" 00:19:06.582 } 00:19:06.582 } 00:19:06.582 ] 00:19:06.582 } 00:19:06.582 ] 00:19:06.582 }' 00:19:06.582 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:06.842 "subsystems": [ 00:19:06.842 { 00:19:06.842 "subsystem": "keyring", 00:19:06.842 "config": [ 00:19:06.842 { 00:19:06.842 "method": "keyring_file_add_key", 00:19:06.842 "params": { 00:19:06.842 "name": "key0", 00:19:06.842 "path": "/tmp/tmp.f36Fv1oopb" 00:19:06.842 } 00:19:06.842 } 00:19:06.842 ] 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "subsystem": "iobuf", 00:19:06.842 "config": [ 00:19:06.842 { 00:19:06.842 "method": "iobuf_set_options", 00:19:06.842 "params": { 00:19:06.842 
"small_pool_count": 8192, 00:19:06.842 "large_pool_count": 1024, 00:19:06.842 "small_bufsize": 8192, 00:19:06.842 "large_bufsize": 135168, 00:19:06.842 "enable_numa": false 00:19:06.842 } 00:19:06.842 } 00:19:06.842 ] 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "subsystem": "sock", 00:19:06.842 "config": [ 00:19:06.842 { 00:19:06.842 "method": "sock_set_default_impl", 00:19:06.842 "params": { 00:19:06.842 "impl_name": "posix" 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "sock_impl_set_options", 00:19:06.842 "params": { 00:19:06.842 "impl_name": "ssl", 00:19:06.842 "recv_buf_size": 4096, 00:19:06.842 "send_buf_size": 4096, 00:19:06.842 "enable_recv_pipe": true, 00:19:06.842 "enable_quickack": false, 00:19:06.842 "enable_placement_id": 0, 00:19:06.842 "enable_zerocopy_send_server": true, 00:19:06.842 "enable_zerocopy_send_client": false, 00:19:06.842 "zerocopy_threshold": 0, 00:19:06.842 "tls_version": 0, 00:19:06.842 "enable_ktls": false 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "sock_impl_set_options", 00:19:06.842 "params": { 00:19:06.842 "impl_name": "posix", 00:19:06.842 "recv_buf_size": 2097152, 00:19:06.842 "send_buf_size": 2097152, 00:19:06.842 "enable_recv_pipe": true, 00:19:06.842 "enable_quickack": false, 00:19:06.842 "enable_placement_id": 0, 00:19:06.842 "enable_zerocopy_send_server": true, 00:19:06.842 "enable_zerocopy_send_client": false, 00:19:06.842 "zerocopy_threshold": 0, 00:19:06.842 "tls_version": 0, 00:19:06.842 "enable_ktls": false 00:19:06.842 } 00:19:06.842 } 00:19:06.842 ] 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "subsystem": "vmd", 00:19:06.842 "config": [] 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "subsystem": "accel", 00:19:06.842 "config": [ 00:19:06.842 { 00:19:06.842 "method": "accel_set_options", 00:19:06.842 "params": { 00:19:06.842 "small_cache_size": 128, 00:19:06.842 "large_cache_size": 16, 00:19:06.842 "task_count": 2048, 00:19:06.842 "sequence_count": 2048, 00:19:06.842 
"buf_count": 2048 00:19:06.842 } 00:19:06.842 } 00:19:06.842 ] 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "subsystem": "bdev", 00:19:06.842 "config": [ 00:19:06.842 { 00:19:06.842 "method": "bdev_set_options", 00:19:06.842 "params": { 00:19:06.842 "bdev_io_pool_size": 65535, 00:19:06.842 "bdev_io_cache_size": 256, 00:19:06.842 "bdev_auto_examine": true, 00:19:06.842 "iobuf_small_cache_size": 128, 00:19:06.842 "iobuf_large_cache_size": 16 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "bdev_raid_set_options", 00:19:06.842 "params": { 00:19:06.842 "process_window_size_kb": 1024, 00:19:06.842 "process_max_bandwidth_mb_sec": 0 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "bdev_iscsi_set_options", 00:19:06.842 "params": { 00:19:06.842 "timeout_sec": 30 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "bdev_nvme_set_options", 00:19:06.842 "params": { 00:19:06.842 "action_on_timeout": "none", 00:19:06.842 "timeout_us": 0, 00:19:06.842 "timeout_admin_us": 0, 00:19:06.842 "keep_alive_timeout_ms": 10000, 00:19:06.842 "arbitration_burst": 0, 00:19:06.842 "low_priority_weight": 0, 00:19:06.842 "medium_priority_weight": 0, 00:19:06.842 "high_priority_weight": 0, 00:19:06.842 "nvme_adminq_poll_period_us": 10000, 00:19:06.842 "nvme_ioq_poll_period_us": 0, 00:19:06.842 "io_queue_requests": 512, 00:19:06.842 "delay_cmd_submit": true, 00:19:06.842 "transport_retry_count": 4, 00:19:06.842 "bdev_retry_count": 3, 00:19:06.842 "transport_ack_timeout": 0, 00:19:06.842 "ctrlr_loss_timeout_sec": 0, 00:19:06.842 "reconnect_delay_sec": 0, 00:19:06.842 "fast_io_fail_timeout_sec": 0, 00:19:06.842 "disable_auto_failback": false, 00:19:06.842 "generate_uuids": false, 00:19:06.842 "transport_tos": 0, 00:19:06.842 "nvme_error_stat": false, 00:19:06.842 "rdma_srq_size": 0, 00:19:06.842 "io_path_stat": false, 00:19:06.842 "allow_accel_sequence": false, 00:19:06.842 "rdma_max_cq_size": 0, 00:19:06.842 "rdma_cm_event_timeout_ms": 0, 
00:19:06.842 "dhchap_digests": [ 00:19:06.842 "sha256", 00:19:06.842 "sha384", 00:19:06.842 "sha512" 00:19:06.842 ], 00:19:06.842 "dhchap_dhgroups": [ 00:19:06.842 "null", 00:19:06.842 "ffdhe2048", 00:19:06.842 "ffdhe3072", 00:19:06.842 "ffdhe4096", 00:19:06.842 "ffdhe6144", 00:19:06.842 "ffdhe8192" 00:19:06.842 ] 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "bdev_nvme_attach_controller", 00:19:06.842 "params": { 00:19:06.842 "name": "nvme0", 00:19:06.842 "trtype": "TCP", 00:19:06.842 "adrfam": "IPv4", 00:19:06.842 "traddr": "10.0.0.2", 00:19:06.842 "trsvcid": "4420", 00:19:06.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.842 "prchk_reftag": false, 00:19:06.842 "prchk_guard": false, 00:19:06.842 "ctrlr_loss_timeout_sec": 0, 00:19:06.842 "reconnect_delay_sec": 0, 00:19:06.842 "fast_io_fail_timeout_sec": 0, 00:19:06.842 "psk": "key0", 00:19:06.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.842 "hdgst": false, 00:19:06.842 "ddgst": false, 00:19:06.842 "multipath": "multipath" 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "bdev_nvme_set_hotplug", 00:19:06.842 "params": { 00:19:06.842 "period_us": 100000, 00:19:06.842 "enable": false 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "bdev_enable_histogram", 00:19:06.842 "params": { 00:19:06.842 "name": "nvme0n1", 00:19:06.842 "enable": true 00:19:06.842 } 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "method": "bdev_wait_for_examine" 00:19:06.842 } 00:19:06.842 ] 00:19:06.842 }, 00:19:06.842 { 00:19:06.842 "subsystem": "nbd", 00:19:06.842 "config": [] 00:19:06.842 } 00:19:06.842 ] 00:19:06.842 }' 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2518466 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2518466 ']' 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2518466 00:19:06.842 17:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518466 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518466' 00:19:06.842 killing process with pid 2518466 00:19:06.842 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2518466 00:19:06.842 Received shutdown signal, test time was about 1.000000 seconds 00:19:06.842 00:19:06.842 Latency(us) 00:19:06.842 [2024-11-20T16:13:24.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.842 [2024-11-20T16:13:24.886Z] =================================================================================================================== 00:19:06.843 [2024-11-20T16:13:24.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.843 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2518466 00:19:07.110 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2518443 00:19:07.110 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2518443 ']' 00:19:07.110 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2518443 00:19:07.110 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.110 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.110 
17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518443 00:19:07.110 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.110 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.110 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518443' 00:19:07.110 killing process with pid 2518443 00:19:07.110 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2518443 00:19:07.110 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2518443 00:19:07.375 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:07.375 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.375 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.375 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:07.375 "subsystems": [ 00:19:07.375 { 00:19:07.375 "subsystem": "keyring", 00:19:07.375 "config": [ 00:19:07.375 { 00:19:07.375 "method": "keyring_file_add_key", 00:19:07.375 "params": { 00:19:07.375 "name": "key0", 00:19:07.375 "path": "/tmp/tmp.f36Fv1oopb" 00:19:07.375 } 00:19:07.375 } 00:19:07.375 ] 00:19:07.375 }, 00:19:07.375 { 00:19:07.375 "subsystem": "iobuf", 00:19:07.375 "config": [ 00:19:07.375 { 00:19:07.375 "method": "iobuf_set_options", 00:19:07.375 "params": { 00:19:07.375 "small_pool_count": 8192, 00:19:07.375 "large_pool_count": 1024, 00:19:07.375 "small_bufsize": 8192, 00:19:07.375 "large_bufsize": 135168, 00:19:07.375 "enable_numa": false 00:19:07.375 } 00:19:07.375 } 00:19:07.375 ] 00:19:07.375 }, 00:19:07.375 { 00:19:07.375 "subsystem": "sock", 00:19:07.375 "config": [ 
00:19:07.375 { 00:19:07.375 "method": "sock_set_default_impl", 00:19:07.375 "params": { 00:19:07.375 "impl_name": "posix" 00:19:07.375 } 00:19:07.375 }, 00:19:07.375 { 00:19:07.376 "method": "sock_impl_set_options", 00:19:07.376 "params": { 00:19:07.376 "impl_name": "ssl", 00:19:07.376 "recv_buf_size": 4096, 00:19:07.376 "send_buf_size": 4096, 00:19:07.376 "enable_recv_pipe": true, 00:19:07.376 "enable_quickack": false, 00:19:07.376 "enable_placement_id": 0, 00:19:07.376 "enable_zerocopy_send_server": true, 00:19:07.376 "enable_zerocopy_send_client": false, 00:19:07.376 "zerocopy_threshold": 0, 00:19:07.376 "tls_version": 0, 00:19:07.376 "enable_ktls": false 00:19:07.376 } 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "method": "sock_impl_set_options", 00:19:07.376 "params": { 00:19:07.376 "impl_name": "posix", 00:19:07.376 "recv_buf_size": 2097152, 00:19:07.376 "send_buf_size": 2097152, 00:19:07.376 "enable_recv_pipe": true, 00:19:07.376 "enable_quickack": false, 00:19:07.376 "enable_placement_id": 0, 00:19:07.376 "enable_zerocopy_send_server": true, 00:19:07.376 "enable_zerocopy_send_client": false, 00:19:07.376 "zerocopy_threshold": 0, 00:19:07.376 "tls_version": 0, 00:19:07.376 "enable_ktls": false 00:19:07.376 } 00:19:07.376 } 00:19:07.376 ] 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "subsystem": "vmd", 00:19:07.376 "config": [] 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "subsystem": "accel", 00:19:07.376 "config": [ 00:19:07.376 { 00:19:07.376 "method": "accel_set_options", 00:19:07.376 "params": { 00:19:07.376 "small_cache_size": 128, 00:19:07.376 "large_cache_size": 16, 00:19:07.376 "task_count": 2048, 00:19:07.376 "sequence_count": 2048, 00:19:07.376 "buf_count": 2048 00:19:07.376 } 00:19:07.376 } 00:19:07.376 ] 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "subsystem": "bdev", 00:19:07.376 "config": [ 00:19:07.376 { 00:19:07.376 "method": "bdev_set_options", 00:19:07.376 "params": { 00:19:07.376 "bdev_io_pool_size": 65535, 00:19:07.376 "bdev_io_cache_size": 
256, 00:19:07.376 "bdev_auto_examine": true, 00:19:07.376 "iobuf_small_cache_size": 128, 00:19:07.376 "iobuf_large_cache_size": 16 00:19:07.376 } 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "method": "bdev_raid_set_options", 00:19:07.376 "params": { 00:19:07.376 "process_window_size_kb": 1024, 00:19:07.376 "process_max_bandwidth_mb_sec": 0 00:19:07.376 } 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "method": "bdev_iscsi_set_options", 00:19:07.376 "params": { 00:19:07.376 "timeout_sec": 30 00:19:07.376 } 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "method": "bdev_nvme_set_options", 00:19:07.376 "params": { 00:19:07.376 "action_on_timeout": "none", 00:19:07.376 "timeout_us": 0, 00:19:07.376 "timeout_admin_us": 0, 00:19:07.376 "keep_alive_timeout_ms": 10000, 00:19:07.376 "arbitration_burst": 0, 00:19:07.376 "low_priority_weight": 0, 00:19:07.376 "medium_priority_weight": 0, 00:19:07.376 "high_priority_weight": 0, 00:19:07.376 "nvme_adminq_poll_period_us": 10000, 00:19:07.376 "nvme_ioq_poll_period_us": 0, 00:19:07.376 "io_queue_requests": 0, 00:19:07.376 "delay_cmd_submit": true, 00:19:07.376 "transport_retry_count": 4, 00:19:07.376 "bdev_retry_count": 3, 00:19:07.376 "transport_ack_timeout": 0, 00:19:07.376 "ctrlr_loss_timeout_sec": 0, 00:19:07.376 "reconnect_delay_sec": 0, 00:19:07.376 "fast_io_fail_timeout_sec": 0, 00:19:07.376 "disable_auto_failback": false, 00:19:07.376 "generate_uuids": false, 00:19:07.376 "transport_tos": 0, 00:19:07.376 "nvme_error_stat": false, 00:19:07.376 "rdma_srq_size": 0, 00:19:07.376 "io_path_stat": false, 00:19:07.376 "allow_accel_sequence": false, 00:19:07.376 "rdma_max_cq_size": 0, 00:19:07.376 "rdma_cm_event_timeout_ms": 0, 00:19:07.376 "dhchap_digests": [ 00:19:07.376 "sha256", 00:19:07.376 "sha384", 00:19:07.376 "sha512" 00:19:07.376 ], 00:19:07.376 "dhchap_dhgroups": [ 00:19:07.376 "null", 00:19:07.376 "ffdhe2048", 00:19:07.376 "ffdhe3072", 00:19:07.376 "ffdhe4096", 00:19:07.376 "ffdhe6144", 00:19:07.376 "ffdhe8192" 00:19:07.376 ] 
00:19:07.376 } 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "method": "bdev_nvme_set_hotplug", 00:19:07.376 "params": { 00:19:07.376 "period_us": 100000, 00:19:07.376 "enable": false 00:19:07.376 } 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "method": "bdev_malloc_create", 00:19:07.376 "params": { 00:19:07.376 "name": "malloc0", 00:19:07.376 "num_blocks": 8192, 00:19:07.376 "block_size": 4096, 00:19:07.376 "physical_block_size": 4096, 00:19:07.376 "uuid": "a4285843-25ba-49fe-be7b-549d707d1333", 00:19:07.376 "optimal_io_boundary": 0, 00:19:07.376 "md_size": 0, 00:19:07.376 "dif_type": 0, 00:19:07.376 "dif_is_head_of_md": false, 00:19:07.376 "dif_pi_format": 0 00:19:07.376 } 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "method": "bdev_wait_for_examine" 00:19:07.376 } 00:19:07.376 ] 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "subsystem": "nbd", 00:19:07.376 "config": [] 00:19:07.376 }, 00:19:07.376 { 00:19:07.376 "subsystem": "scheduler", 00:19:07.376 "config": [ 00:19:07.376 { 00:19:07.376 "method": "framework_set_scheduler", 00:19:07.377 "params": { 00:19:07.377 "name": "static" 00:19:07.377 } 00:19:07.377 } 00:19:07.377 ] 00:19:07.377 }, 00:19:07.377 { 00:19:07.377 "subsystem": "nvmf", 00:19:07.377 "config": [ 00:19:07.377 { 00:19:07.377 "method": "nvmf_set_config", 00:19:07.377 "params": { 00:19:07.377 "discovery_filter": "match_any", 00:19:07.377 "admin_cmd_passthru": { 00:19:07.377 "identify_ctrlr": false 00:19:07.377 }, 00:19:07.377 "dhchap_digests": [ 00:19:07.377 "sha256", 00:19:07.377 "sha384", 00:19:07.377 "sha512" 00:19:07.377 ], 00:19:07.377 "dhchap_dhgroups": [ 00:19:07.377 "null", 00:19:07.377 "ffdhe2048", 00:19:07.377 "ffdhe3072", 00:19:07.377 "ffdhe4096", 00:19:07.377 "ffdhe6144", 00:19:07.377 "ffdhe8192" 00:19:07.377 ] 00:19:07.377 } 00:19:07.377 }, 00:19:07.377 { 00:19:07.377 "method": "nvmf_set_max_subsystems", 00:19:07.377 "params": { 00:19:07.377 "max_subsystems": 1024 00:19:07.377 } 00:19:07.377 }, 00:19:07.377 { 00:19:07.377 "method": 
"nvmf_set_crdt", 00:19:07.377 "params": { 00:19:07.377 "crdt1": 0, 00:19:07.377 "crdt2": 0, 00:19:07.377 "crdt3": 0 00:19:07.377 } 00:19:07.377 }, 00:19:07.377 { 00:19:07.377 "method": "nvmf_create_transport", 00:19:07.377 "params": { 00:19:07.377 "trtype": "TCP", 00:19:07.377 "max_queue_depth": 128, 00:19:07.377 "max_io_qpairs_per_ctrlr": 127, 00:19:07.377 "in_capsule_data_size": 4096, 00:19:07.377 "max_io_size": 131072, 00:19:07.377 "io_unit_size": 131072, 00:19:07.377 "max_aq_depth": 128, 00:19:07.377 "num_shared_buffers": 511, 00:19:07.377 "buf_cache_size": 4294967295, 00:19:07.377 "dif_insert_or_strip": false, 00:19:07.377 "zcopy": false, 00:19:07.377 "c2h_success": false, 00:19:07.377 "sock_priority": 0, 00:19:07.377 "abort_timeout_sec": 1, 00:19:07.377 "ack_timeout": 0, 00:19:07.377 "data_wr_pool_size": 0 00:19:07.377 } 00:19:07.377 }, 00:19:07.377 { 00:19:07.377 "method": "nvmf_create_subsystem", 00:19:07.377 "params": { 00:19:07.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.377 "allow_any_host": false, 00:19:07.377 "serial_number": "00000000000000000000", 00:19:07.377 "model_number": "SPDK bdev Controller", 00:19:07.377 "max_namespaces": 32, 00:19:07.377 "min_cntlid": 1, 00:19:07.377 "max_cntlid": 65519, 00:19:07.377 "ana_reporting": false 00:19:07.377 } 00:19:07.377 }, 00:19:07.377 { 00:19:07.377 "method": "nvmf_subsystem_add_host", 00:19:07.377 "params": { 00:19:07.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.377 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.377 "psk": "key0" 00:19:07.377 } 00:19:07.377 }, 00:19:07.377 { 00:19:07.377 "method": "nvmf_subsystem_add_ns", 00:19:07.377 "params": { 00:19:07.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.377 "namespace": { 00:19:07.377 "nsid": 1, 00:19:07.377 "bdev_name": "malloc0", 00:19:07.377 "nguid": "A428584325BA49FEBE7B549D707D1333", 00:19:07.377 "uuid": "a4285843-25ba-49fe-be7b-549d707d1333", 00:19:07.377 "no_auto_visible": false 00:19:07.377 } 00:19:07.377 } 00:19:07.377 }, 00:19:07.377 { 
00:19:07.377 "method": "nvmf_subsystem_add_listener", 00:19:07.377 "params": { 00:19:07.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.377 "listen_address": { 00:19:07.377 "trtype": "TCP", 00:19:07.377 "adrfam": "IPv4", 00:19:07.377 "traddr": "10.0.0.2", 00:19:07.377 "trsvcid": "4420" 00:19:07.377 }, 00:19:07.377 "secure_channel": false, 00:19:07.377 "sock_impl": "ssl" 00:19:07.377 } 00:19:07.377 } 00:19:07.377 ] 00:19:07.377 } 00:19:07.377 ] 00:19:07.377 }' 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2518943 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2518943 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2518943 ']' 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.377 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.377 [2024-11-20 17:13:25.244917] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:19:07.377 [2024-11-20 17:13:25.244964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.377 [2024-11-20 17:13:25.323972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.377 [2024-11-20 17:13:25.363453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.377 [2024-11-20 17:13:25.363491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.377 [2024-11-20 17:13:25.363499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.378 [2024-11-20 17:13:25.363505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.378 [2024-11-20 17:13:25.363511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.378 [2024-11-20 17:13:25.364110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.637 [2024-11-20 17:13:25.577330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.637 [2024-11-20 17:13:25.609369] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.637 [2024-11-20 17:13:25.609597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2519179 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2519179 /var/tmp/bdevperf.sock 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2519179 ']' 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.205 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:08.205 "subsystems": [ 00:19:08.205 { 00:19:08.205 "subsystem": "keyring", 00:19:08.205 "config": [ 00:19:08.205 { 00:19:08.205 "method": "keyring_file_add_key", 00:19:08.205 "params": { 00:19:08.205 "name": "key0", 00:19:08.205 "path": "/tmp/tmp.f36Fv1oopb" 00:19:08.205 } 00:19:08.205 } 00:19:08.205 ] 00:19:08.205 }, 00:19:08.205 { 00:19:08.205 "subsystem": "iobuf", 00:19:08.205 "config": [ 00:19:08.205 { 00:19:08.205 "method": "iobuf_set_options", 00:19:08.205 "params": { 00:19:08.205 "small_pool_count": 8192, 00:19:08.205 "large_pool_count": 1024, 00:19:08.205 "small_bufsize": 8192, 00:19:08.205 "large_bufsize": 135168, 00:19:08.205 "enable_numa": false 00:19:08.205 } 00:19:08.205 } 00:19:08.205 ] 00:19:08.205 }, 00:19:08.205 { 00:19:08.205 "subsystem": "sock", 00:19:08.205 "config": [ 00:19:08.205 { 00:19:08.205 "method": "sock_set_default_impl", 00:19:08.205 "params": { 00:19:08.205 "impl_name": "posix" 00:19:08.205 } 00:19:08.205 }, 00:19:08.205 { 00:19:08.205 "method": "sock_impl_set_options", 00:19:08.205 "params": { 00:19:08.205 "impl_name": "ssl", 00:19:08.205 "recv_buf_size": 4096, 00:19:08.205 "send_buf_size": 4096, 00:19:08.205 "enable_recv_pipe": true, 00:19:08.205 "enable_quickack": false, 00:19:08.205 "enable_placement_id": 0, 00:19:08.206 "enable_zerocopy_send_server": true, 00:19:08.206 "enable_zerocopy_send_client": false, 00:19:08.206 "zerocopy_threshold": 0, 00:19:08.206 "tls_version": 0, 00:19:08.206 "enable_ktls": false 00:19:08.206 } 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "method": "sock_impl_set_options", 00:19:08.206 "params": { 
00:19:08.206 "impl_name": "posix", 00:19:08.206 "recv_buf_size": 2097152, 00:19:08.206 "send_buf_size": 2097152, 00:19:08.206 "enable_recv_pipe": true, 00:19:08.206 "enable_quickack": false, 00:19:08.206 "enable_placement_id": 0, 00:19:08.206 "enable_zerocopy_send_server": true, 00:19:08.206 "enable_zerocopy_send_client": false, 00:19:08.206 "zerocopy_threshold": 0, 00:19:08.206 "tls_version": 0, 00:19:08.206 "enable_ktls": false 00:19:08.206 } 00:19:08.206 } 00:19:08.206 ] 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "subsystem": "vmd", 00:19:08.206 "config": [] 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "subsystem": "accel", 00:19:08.206 "config": [ 00:19:08.206 { 00:19:08.206 "method": "accel_set_options", 00:19:08.206 "params": { 00:19:08.206 "small_cache_size": 128, 00:19:08.206 "large_cache_size": 16, 00:19:08.206 "task_count": 2048, 00:19:08.206 "sequence_count": 2048, 00:19:08.206 "buf_count": 2048 00:19:08.206 } 00:19:08.206 } 00:19:08.206 ] 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "subsystem": "bdev", 00:19:08.206 "config": [ 00:19:08.206 { 00:19:08.206 "method": "bdev_set_options", 00:19:08.206 "params": { 00:19:08.206 "bdev_io_pool_size": 65535, 00:19:08.206 "bdev_io_cache_size": 256, 00:19:08.206 "bdev_auto_examine": true, 00:19:08.206 "iobuf_small_cache_size": 128, 00:19:08.206 "iobuf_large_cache_size": 16 00:19:08.206 } 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "method": "bdev_raid_set_options", 00:19:08.206 "params": { 00:19:08.206 "process_window_size_kb": 1024, 00:19:08.206 "process_max_bandwidth_mb_sec": 0 00:19:08.206 } 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "method": "bdev_iscsi_set_options", 00:19:08.206 "params": { 00:19:08.206 "timeout_sec": 30 00:19:08.206 } 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "method": "bdev_nvme_set_options", 00:19:08.206 "params": { 00:19:08.206 "action_on_timeout": "none", 00:19:08.206 "timeout_us": 0, 00:19:08.206 "timeout_admin_us": 0, 00:19:08.206 "keep_alive_timeout_ms": 10000, 00:19:08.206 
"arbitration_burst": 0, 00:19:08.206 "low_priority_weight": 0, 00:19:08.206 "medium_priority_weight": 0, 00:19:08.206 "high_priority_weight": 0, 00:19:08.206 "nvme_adminq_poll_period_us": 10000, 00:19:08.206 "nvme_ioq_poll_period_us": 0, 00:19:08.206 "io_queue_requests": 512, 00:19:08.206 "delay_cmd_submit": true, 00:19:08.206 "transport_retry_count": 4, 00:19:08.206 "bdev_retry_count": 3, 00:19:08.206 "transport_ack_timeout": 0, 00:19:08.206 "ctrlr_loss_timeout_sec": 0, 00:19:08.206 "reconnect_delay_sec": 0, 00:19:08.206 "fast_io_fail_timeout_sec": 0, 00:19:08.206 "disable_auto_failback": false, 00:19:08.206 "generate_uuids": false, 00:19:08.206 "transport_tos": 0, 00:19:08.206 "nvme_error_stat": false, 00:19:08.206 "rdma_srq_size": 0, 00:19:08.206 "io_path_stat": false, 00:19:08.206 "allow_accel_sequence": false, 00:19:08.206 "rdma_max_cq_size": 0, 00:19:08.206 "rdma_cm_event_timeout_ms": 0, 00:19:08.206 "dhchap_digests": [ 00:19:08.206 "sha256", 00:19:08.206 "sha384", 00:19:08.206 "sha512" 00:19:08.206 ], 00:19:08.206 "dhchap_dhgroups": [ 00:19:08.206 "null", 00:19:08.206 "ffdhe2048", 00:19:08.206 "ffdhe3072", 00:19:08.206 "ffdhe4096", 00:19:08.206 "ffdhe6144", 00:19:08.206 "ffdhe8192" 00:19:08.206 ] 00:19:08.206 } 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "method": "bdev_nvme_attach_controller", 00:19:08.206 "params": { 00:19:08.206 "name": "nvme0", 00:19:08.206 "trtype": "TCP", 00:19:08.206 "adrfam": "IPv4", 00:19:08.206 "traddr": "10.0.0.2", 00:19:08.206 "trsvcid": "4420", 00:19:08.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.206 "prchk_reftag": false, 00:19:08.206 "prchk_guard": false, 00:19:08.206 "ctrlr_loss_timeout_sec": 0, 00:19:08.206 "reconnect_delay_sec": 0, 00:19:08.206 "fast_io_fail_timeout_sec": 0, 00:19:08.206 "psk": "key0", 00:19:08.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.206 "hdgst": false, 00:19:08.206 "ddgst": false, 00:19:08.206 "multipath": "multipath" 00:19:08.206 } 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 
"method": "bdev_nvme_set_hotplug", 00:19:08.206 "params": { 00:19:08.206 "period_us": 100000, 00:19:08.206 "enable": false 00:19:08.206 } 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "method": "bdev_enable_histogram", 00:19:08.206 "params": { 00:19:08.206 "name": "nvme0n1", 00:19:08.206 "enable": true 00:19:08.206 } 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "method": "bdev_wait_for_examine" 00:19:08.206 } 00:19:08.206 ] 00:19:08.206 }, 00:19:08.206 { 00:19:08.206 "subsystem": "nbd", 00:19:08.206 "config": [] 00:19:08.206 } 00:19:08.206 ] 00:19:08.206 }' 00:19:08.206 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.206 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.206 [2024-11-20 17:13:26.173708] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:19:08.206 [2024-11-20 17:13:26.173756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519179 ] 00:19:08.465 [2024-11-20 17:13:26.248632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.465 [2024-11-20 17:13:26.288972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.465 [2024-11-20 17:13:26.442889] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.032 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.032 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.032 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:09.032 17:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:09.291 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.291 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:09.291 Running I/O for 1 seconds... 00:19:10.669 5436.00 IOPS, 21.23 MiB/s 00:19:10.669 Latency(us) 00:19:10.669 [2024-11-20T16:13:28.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.669 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:10.669 Verification LBA range: start 0x0 length 0x2000 00:19:10.669 nvme0n1 : 1.01 5496.09 21.47 0.00 0.00 23133.34 5149.26 21720.50 00:19:10.669 [2024-11-20T16:13:28.712Z] =================================================================================================================== 00:19:10.669 [2024-11-20T16:13:28.712Z] Total : 5496.09 21.47 0.00 0.00 23133.34 5149.26 21720.50 00:19:10.669 { 00:19:10.669 "results": [ 00:19:10.669 { 00:19:10.669 "job": "nvme0n1", 00:19:10.669 "core_mask": "0x2", 00:19:10.669 "workload": "verify", 00:19:10.669 "status": "finished", 00:19:10.669 "verify_range": { 00:19:10.669 "start": 0, 00:19:10.669 "length": 8192 00:19:10.669 }, 00:19:10.669 "queue_depth": 128, 00:19:10.669 "io_size": 4096, 00:19:10.669 "runtime": 1.012538, 00:19:10.669 "iops": 5496.090023288015, 00:19:10.669 "mibps": 21.469101653468808, 00:19:10.669 "io_failed": 0, 00:19:10.669 "io_timeout": 0, 00:19:10.669 "avg_latency_us": 23133.344392932016, 00:19:10.669 "min_latency_us": 5149.257142857143, 00:19:10.669 "max_latency_us": 21720.502857142856 00:19:10.669 } 00:19:10.669 ], 00:19:10.669 "core_count": 1 00:19:10.669 } 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:10.669 17:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:10.669 nvmf_trace.0 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2519179 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2519179 ']' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2519179 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2519179 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2519179' 00:19:10.669 killing process with pid 2519179 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2519179 00:19:10.669 Received shutdown signal, test time was about 1.000000 seconds 00:19:10.669 00:19:10.669 Latency(us) 00:19:10.669 [2024-11-20T16:13:28.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.669 [2024-11-20T16:13:28.712Z] =================================================================================================================== 00:19:10.669 [2024-11-20T16:13:28.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2519179 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:10.669 rmmod nvme_tcp 00:19:10.669 rmmod nvme_fabrics 00:19:10.669 rmmod nvme_keyring 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2518943 ']' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2518943 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2518943 ']' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2518943 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.669 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518943 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518943' 00:19:10.928 killing process with pid 2518943 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2518943 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2518943 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.928 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.465 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.465 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NjICG0R8JH /tmp/tmp.y1KGq1kASY /tmp/tmp.f36Fv1oopb 00:19:13.465 00:19:13.465 real 1m19.473s 00:19:13.465 user 2m0.930s 00:19:13.465 sys 0m31.276s 00:19:13.465 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.465 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.465 ************************************ 00:19:13.465 END TEST nvmf_tls 00:19:13.465 ************************************ 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.465 ************************************ 00:19:13.465 START TEST nvmf_fips 00:19:13.465 ************************************ 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:13.465 * Looking for test storage... 00:19:13.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.465 
17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:13.465 17:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:13.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.465 --rc genhtml_branch_coverage=1 00:19:13.465 --rc genhtml_function_coverage=1 00:19:13.465 --rc genhtml_legend=1 00:19:13.465 --rc geninfo_all_blocks=1 00:19:13.465 --rc geninfo_unexecuted_blocks=1 00:19:13.465 00:19:13.465 ' 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:13.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.465 --rc genhtml_branch_coverage=1 00:19:13.465 --rc genhtml_function_coverage=1 00:19:13.465 --rc genhtml_legend=1 00:19:13.465 --rc geninfo_all_blocks=1 00:19:13.465 --rc geninfo_unexecuted_blocks=1 00:19:13.465 00:19:13.465 ' 00:19:13.465 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:13.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.465 --rc genhtml_branch_coverage=1 00:19:13.465 --rc genhtml_function_coverage=1 00:19:13.465 --rc genhtml_legend=1 00:19:13.466 --rc geninfo_all_blocks=1 00:19:13.466 --rc geninfo_unexecuted_blocks=1 00:19:13.466 00:19:13.466 ' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:13.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.466 --rc genhtml_branch_coverage=1 00:19:13.466 --rc genhtml_function_coverage=1 00:19:13.466 --rc genhtml_legend=1 00:19:13.466 --rc geninfo_all_blocks=1 00:19:13.466 --rc geninfo_unexecuted_blocks=1 00:19:13.466 00:19:13.466 ' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
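The `cmp_versions` trace above walks a version string component by component (`IFS=.-`, `read -ra`, then integer compares per index) to decide `lt 1.15 2`. A minimal standalone sketch of that decimal-wise comparison — the helper name and exact structure here are illustrative, not the SPDK `scripts/common.sh` implementation:

```shell
# ver_lt A B: succeed (exit 0) if version A is strictly less than B.
# Components are split on '.' or '-' and compared as integers;
# missing components compare as 0 (so 1.15 == 1.15.0).
ver_lt() {
    local -a v1 v2
    local i len
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1  # versions are equal, so not strictly less-than
}
```

The same loop, run with `op='>='`, is what the trace later uses to verify `openssl version` 3.1.1 against the 3.0.0 floor required for the FIPS provider.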
00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.466 17:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.466 17:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
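The `paths/export.sh` lines above prepend the same toolchain directories (`golangci`, `protoc`, `go`) on every source, which is why the exported PATH shows the triplet repeated many times over. A hedged, illustrative dedup helper (not part of the SPDK scripts) that keeps the first occurrence of each entry in order:

```shell
# dedup_path PATHSTRING: print PATHSTRING with duplicate entries removed,
# preserving the first occurrence of each directory.
dedup_path() {
    local entry out= seen=:
    local IFS=:
    # Unquoted expansion splits on ':' under the local IFS; assumes
    # entries contain no glob metacharacters.
    for entry in $1; do
        [[ $seen == *":$entry:"* ]] && continue
        seen+="$entry:"
        out+="${out:+:}$entry"
    done
    printf '%s\n' "$out"
}
```

Duplicated entries are harmless for lookup (the first hit wins) but bloat the environment and every traced `export PATH` line, as seen here.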
00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:13.466 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:13.466 Error setting digest 00:19:13.466 400267FECC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:13.467 400267FECC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.467 17:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.467 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
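The array setup above buckets PCI `vendor:device` IDs into NIC families (Intel E810, Intel X722, Mellanox) before scanning the bus. A condensed sketch of that classification — the ID lists mirror the ones visible in the trace, but this is a standalone approximation, not the actual `gather_supported_nvmf_pci_devs` helper:

```shell
# classify_nic VENDOR:DEVICE: print the NIC family bucket the nvmf common
# scripts would place this PCI ID into.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice)
        0x8086:0x37d2)               echo x722 ;;    # Intel X722 (i40e)
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}
```

This is why the two `0000:86:00.x (0x8086 - 0x159b)` ports found later in the log land in the `e810` bucket and become `pci_devs`.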
00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:20.041 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:20.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:20.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:20.042 Found net devices under 0000:86:00.0: cvl_0_0 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:20.042 Found net devices under 0000:86:00.1: cvl_0_1 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.042 17:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:20.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:19:20.042 00:19:20.042 --- 10.0.0.2 ping statistics --- 00:19:20.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.042 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:20.042 00:19:20.042 --- 10.0.0.1 ping statistics --- 00:19:20.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.042 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:20.042 17:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2523119 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2523119 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2523119 ']' 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.042 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.042 [2024-11-20 17:13:37.464114] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:19:20.042 [2024-11-20 17:13:37.464169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.042 [2024-11-20 17:13:37.545410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.042 [2024-11-20 17:13:37.587607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.042 [2024-11-20 17:13:37.587639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.042 [2024-11-20 17:13:37.587646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.042 [2024-11-20 17:13:37.587652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.042 [2024-11-20 17:13:37.587658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.042 [2024-11-20 17:13:37.588200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.301 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.301 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:20.301 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.RWD 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.RWD 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.RWD 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.RWD 00:19:20.302 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:20.560 [2024-11-20 17:13:38.488951] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.560 [2024-11-20 17:13:38.504961] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.561 [2024-11-20 17:13:38.505138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.561 malloc0 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2523264 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2523264 /var/tmp/bdevperf.sock 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2523264 ']' 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.561 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.820 [2024-11-20 17:13:38.633840] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:19:20.820 [2024-11-20 17:13:38.633893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523264 ] 00:19:20.820 [2024-11-20 17:13:38.709195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.820 [2024-11-20 17:13:38.749154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.753 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.753 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:21.753 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.RWD 00:19:21.753 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:22.011 [2024-11-20 17:13:39.822614] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.011 TLSTESTn1 00:19:22.011 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.011 Running I/O for 10 seconds... 
00:19:23.957 5540.00 IOPS, 21.64 MiB/s [2024-11-20T16:13:43.377Z] 5548.50 IOPS, 21.67 MiB/s [2024-11-20T16:13:44.313Z] 5549.00 IOPS, 21.68 MiB/s [2024-11-20T16:13:45.250Z] 5565.75 IOPS, 21.74 MiB/s [2024-11-20T16:13:46.187Z] 5583.80 IOPS, 21.81 MiB/s [2024-11-20T16:13:47.123Z] 5506.50 IOPS, 21.51 MiB/s [2024-11-20T16:13:48.059Z] 5440.86 IOPS, 21.25 MiB/s [2024-11-20T16:13:49.434Z] 5392.12 IOPS, 21.06 MiB/s [2024-11-20T16:13:50.369Z] 5334.11 IOPS, 20.84 MiB/s [2024-11-20T16:13:50.369Z] 5310.50 IOPS, 20.74 MiB/s 00:19:32.326 Latency(us) 00:19:32.326 [2024-11-20T16:13:50.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.326 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:32.326 Verification LBA range: start 0x0 length 0x2000 00:19:32.326 TLSTESTn1 : 10.02 5313.55 20.76 0.00 0.00 24052.68 5991.86 32206.26 00:19:32.326 [2024-11-20T16:13:50.369Z] =================================================================================================================== 00:19:32.326 [2024-11-20T16:13:50.369Z] Total : 5313.55 20.76 0.00 0.00 24052.68 5991.86 32206.26 00:19:32.326 { 00:19:32.326 "results": [ 00:19:32.326 { 00:19:32.326 "job": "TLSTESTn1", 00:19:32.326 "core_mask": "0x4", 00:19:32.326 "workload": "verify", 00:19:32.326 "status": "finished", 00:19:32.326 "verify_range": { 00:19:32.326 "start": 0, 00:19:32.326 "length": 8192 00:19:32.326 }, 00:19:32.326 "queue_depth": 128, 00:19:32.326 "io_size": 4096, 00:19:32.326 "runtime": 10.018161, 00:19:32.326 "iops": 5313.550061732887, 00:19:32.326 "mibps": 20.75605492864409, 00:19:32.326 "io_failed": 0, 00:19:32.326 "io_timeout": 0, 00:19:32.326 "avg_latency_us": 24052.679382254857, 00:19:32.326 "min_latency_us": 5991.862857142857, 00:19:32.326 "max_latency_us": 32206.262857142858 00:19:32.326 } 00:19:32.326 ], 00:19:32.326 "core_count": 1 00:19:32.326 } 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:32.326 
17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:32.326 nvmf_trace.0 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2523264 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2523264 ']' 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2523264 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2523264 00:19:32.326 17:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:32.326 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2523264' 00:19:32.326 killing process with pid 2523264 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2523264 00:19:32.327 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.327 00:19:32.327 Latency(us) 00:19:32.327 [2024-11-20T16:13:50.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.327 [2024-11-20T16:13:50.370Z] =================================================================================================================== 00:19:32.327 [2024-11-20T16:13:50.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2523264 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:32.327 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:32.327 rmmod nvme_tcp 00:19:32.586 rmmod nvme_fabrics 00:19:32.586 rmmod nvme_keyring 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2523119 ']' 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2523119 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2523119 ']' 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2523119 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2523119 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2523119' 00:19:32.586 killing process with pid 2523119 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2523119 00:19:32.586 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2523119 00:19:32.845 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:32.845 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:32.845 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:32.845 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:32.845 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:32.846 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:32.846 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:32.846 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.846 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:32.846 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.846 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.846 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.RWD 00:19:34.752 00:19:34.752 real 0m21.655s 00:19:34.752 user 0m22.954s 00:19:34.752 sys 0m10.081s 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:34.752 ************************************ 00:19:34.752 END TEST nvmf_fips 00:19:34.752 ************************************ 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.752 ************************************ 00:19:34.752 START TEST nvmf_control_msg_list 00:19:34.752 ************************************ 00:19:34.752 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:35.012 * Looking for test storage... 00:19:35.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.012 17:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:35.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.012 --rc genhtml_branch_coverage=1 00:19:35.012 --rc genhtml_function_coverage=1 00:19:35.012 --rc genhtml_legend=1 00:19:35.012 --rc geninfo_all_blocks=1 00:19:35.012 --rc geninfo_unexecuted_blocks=1 00:19:35.012 00:19:35.012 ' 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:35.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.012 --rc genhtml_branch_coverage=1 00:19:35.012 --rc genhtml_function_coverage=1 00:19:35.012 --rc genhtml_legend=1 00:19:35.012 --rc geninfo_all_blocks=1 00:19:35.012 --rc geninfo_unexecuted_blocks=1 00:19:35.012 00:19:35.012 ' 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:35.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.012 --rc genhtml_branch_coverage=1 00:19:35.012 --rc genhtml_function_coverage=1 00:19:35.012 --rc genhtml_legend=1 00:19:35.012 --rc geninfo_all_blocks=1 00:19:35.012 --rc geninfo_unexecuted_blocks=1 00:19:35.012 00:19:35.012 ' 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:19:35.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.012 --rc genhtml_branch_coverage=1 00:19:35.012 --rc genhtml_function_coverage=1 00:19:35.012 --rc genhtml_legend=1 00:19:35.012 --rc geninfo_all_blocks=1 00:19:35.012 --rc geninfo_unexecuted_blocks=1 00:19:35.012 00:19:35.012 ' 00:19:35.012 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.013 17:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.013 17:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:35.013 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:41.582 17:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.582 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:41.583 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:41.583 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:41.583 17:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:41.583 Found net devices under 0000:86:00.0: cvl_0_0 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:41.583 17:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:41.583 Found net devices under 0000:86:00.1: cvl_0_1 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.583 17:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:41.583 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:41.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:19:41.583 00:19:41.583 --- 10.0.0.2 ping statistics --- 00:19:41.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.583 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:19:41.584 00:19:41.584 --- 10.0.0.1 ping statistics --- 00:19:41.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.584 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
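The `ip`/`iptables` sequence traced above builds the loopback test topology: one port of the NIC pair (`cvl_0_0`) is moved into a fresh network namespace for the NVMe-oF target, while its sibling (`cvl_0_1`) stays in the root namespace for the initiator, so TCP traffic between 10.0.0.1 and 10.0.0.2 crosses the physical wire. A condensed sketch of that setup is below; the helper function name is hypothetical (it is not part of the SPDK scripts), while the interface names, namespace name, addresses, and port mirror the log:

```shell
# Hypothetical helper condensing the namespace setup traced in the log above.
# Requires root and a back-to-back-cabled NIC pair; illustration only.
setup_nvmf_tcp_netns() {
    local target_if=${1:-cvl_0_0}       # NIC moved into the target namespace
    local initiator_if=${2:-cvl_0_1}    # NIC left in the root namespace
    local ns=${3:-cvl_0_0_ns_spdk}      # namespace the nvmf_tgt app runs in

    # Start from a clean slate on both ports
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Create the namespace and move the target port into it
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # Address each side of the point-to-point link
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    # Bring up both ports plus loopback inside the namespace
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Admit NVMe/TCP traffic to the default port 4420 on the initiator side
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Verify connectivity in both directions, as the log does
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this in place, the target is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log's `nvmf_tgt` invocation and the later cleanup (`_remove_spdk_ns`, `ip -4 addr flush cvl_0_1`) are both namespace-aware.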
tcp -o' 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2528827 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2528827 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2528827 ']' 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.584 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 [2024-11-20 17:13:58.988247] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:19:41.584 [2024-11-20 17:13:58.988292] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.584 [2024-11-20 17:13:59.064285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.584 [2024-11-20 17:13:59.104711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.584 [2024-11-20 17:13:59.104745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.584 [2024-11-20 17:13:59.104752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.584 [2024-11-20 17:13:59.104758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.584 [2024-11-20 17:13:59.104763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:41.584 [2024-11-20 17:13:59.105329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 [2024-11-20 17:13:59.241924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 Malloc0 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:41.584 [2024-11-20 17:13:59.282246] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2528869 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2528870 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2528871 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.584 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2528869 00:19:41.584 [2024-11-20 17:13:59.380985] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:41.584 [2024-11-20 17:13:59.381164] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:41.584 [2024-11-20 17:13:59.381327] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:42.521 Initializing NVMe Controllers 00:19:42.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:42.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:42.521 Initialization complete. Launching workers. 00:19:42.521 ======================================================== 00:19:42.521 Latency(us) 00:19:42.521 Device Information : IOPS MiB/s Average min max 00:19:42.521 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5258.00 20.54 189.83 129.61 536.27 00:19:42.521 ======================================================== 00:19:42.521 Total : 5258.00 20.54 189.83 129.61 536.27 00:19:42.521 00:19:42.836 Initializing NVMe Controllers 00:19:42.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:42.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:42.836 Initialization complete. Launching workers. 
00:19:42.836 ======================================================== 00:19:42.836 Latency(us) 00:19:42.836 Device Information : IOPS MiB/s Average min max 00:19:42.836 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4119.00 16.09 242.36 136.25 488.54 00:19:42.836 ======================================================== 00:19:42.836 Total : 4119.00 16.09 242.36 136.25 488.54 00:19:42.836 00:19:42.836 Initializing NVMe Controllers 00:19:42.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:42.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:42.836 Initialization complete. Launching workers. 00:19:42.836 ======================================================== 00:19:42.836 Latency(us) 00:19:42.836 Device Information : IOPS MiB/s Average min max 00:19:42.836 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4893.99 19.12 203.97 133.97 480.36 00:19:42.836 ======================================================== 00:19:42.836 Total : 4893.99 19.12 203.97 133.97 480.36 00:19:42.836 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2528870 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2528871 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.836 17:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.836 rmmod nvme_tcp 00:19:42.836 rmmod nvme_fabrics 00:19:42.836 rmmod nvme_keyring 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2528827 ']' 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2528827 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2528827 ']' 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2528827 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2528827 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2528827' 00:19:42.836 killing process with pid 2528827 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2528827 00:19:42.836 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2528827 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.194 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.098 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:45.098 00:19:45.098 real 0m10.156s 00:19:45.098 user 0m6.613s 
00:19:45.098 sys 0m5.660s 00:19:45.098 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.098 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:45.098 ************************************ 00:19:45.098 END TEST nvmf_control_msg_list 00:19:45.098 ************************************ 00:19:45.098 17:14:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:45.098 17:14:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.098 17:14:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.098 17:14:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.098 ************************************ 00:19:45.098 START TEST nvmf_wait_for_buf 00:19:45.098 ************************************ 00:19:45.098 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:45.098 * Looking for test storage... 
00:19:45.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.098 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:45.098 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:45.098 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:45.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.358 --rc genhtml_branch_coverage=1 00:19:45.358 --rc genhtml_function_coverage=1 00:19:45.358 --rc genhtml_legend=1 00:19:45.358 --rc geninfo_all_blocks=1 00:19:45.358 --rc geninfo_unexecuted_blocks=1 00:19:45.358 00:19:45.358 ' 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:45.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.358 --rc genhtml_branch_coverage=1 00:19:45.358 --rc genhtml_function_coverage=1 00:19:45.358 --rc genhtml_legend=1 00:19:45.358 --rc geninfo_all_blocks=1 00:19:45.358 --rc geninfo_unexecuted_blocks=1 00:19:45.358 00:19:45.358 ' 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:45.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.358 --rc genhtml_branch_coverage=1 00:19:45.358 --rc genhtml_function_coverage=1 00:19:45.358 --rc genhtml_legend=1 00:19:45.358 --rc geninfo_all_blocks=1 00:19:45.358 --rc geninfo_unexecuted_blocks=1 00:19:45.358 00:19:45.358 ' 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:45.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.358 --rc genhtml_branch_coverage=1 00:19:45.358 --rc genhtml_function_coverage=1 00:19:45.358 --rc genhtml_legend=1 00:19:45.358 --rc geninfo_all_blocks=1 00:19:45.358 --rc geninfo_unexecuted_blocks=1 00:19:45.358 00:19:45.358 ' 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.358 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.359 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.929 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:51.930 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:51.930 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:51.930 Found net devices under 0000:86:00.0: cvl_0_0 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.930 17:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:51.930 Found net devices under 0000:86:00.1: cvl_0_1 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:51.930 17:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.930 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.930 17:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:51.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:19:51.930 00:19:51.930 --- 10.0.0.2 ping statistics --- 00:19:51.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.930 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:51.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:19:51.930 00:19:51.930 --- 10.0.0.1 ping statistics --- 00:19:51.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.930 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2532637 00:19:51.930 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2532637 00:19:51.931 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:51.931 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2532637 ']' 00:19:51.931 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.931 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.931 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.931 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.931 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.931 [2024-11-20 17:14:09.216891] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:19:51.931 [2024-11-20 17:14:09.216943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.931 [2024-11-20 17:14:09.295367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.931 [2024-11-20 17:14:09.336604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.931 [2024-11-20 17:14:09.336637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:51.931 [2024-11-20 17:14:09.336644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.931 [2024-11-20 17:14:09.336650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.931 [2024-11-20 17:14:09.336655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.931 [2024-11-20 17:14:09.337229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 
17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 Malloc0 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.190 [2024-11-20 17:14:10.186481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.190 [2024-11-20 17:14:10.214706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:52.190 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.449 [2024-11-20 17:14:10.298898] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:53.826 Initializing NVMe Controllers 00:19:53.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:53.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:53.827 Initialization complete. Launching workers. 00:19:53.827 ======================================================== 00:19:53.827 Latency(us) 00:19:53.827 Device Information : IOPS MiB/s Average min max 00:19:53.827 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33571.11 23974.86 63843.10 00:19:53.827 ======================================================== 00:19:53.827 Total : 124.00 15.50 33571.11 23974.86 63843.10 00:19:53.827 00:19:53.827 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:53.827 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:53.827 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.827 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:53.827 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.087 17:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:54.087 rmmod nvme_tcp 00:19:54.087 rmmod nvme_fabrics 00:19:54.087 rmmod nvme_keyring 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2532637 ']' 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2532637 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2532637 ']' 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2532637 
00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2532637 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2532637' 00:19:54.087 killing process with pid 2532637 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2532637 00:19:54.087 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2532637 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:54.346 17:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.346 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:56.253 00:19:56.253 real 0m11.201s 00:19:56.253 user 0m4.864s 00:19:56.253 sys 0m4.951s 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:56.253 ************************************ 00:19:56.253 END TEST nvmf_wait_for_buf 00:19:56.253 ************************************ 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:56.253 17:14:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:02.822 
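The wait_for_buf test that just finished above deliberately shrinks the iobuf small pool and then checks that the `small_pool.retry` counter went non-zero. A minimal sketch of the RPC sequence it drives, reconstructed from the xtrace lines above; the `rpc.py` invocation path is an assumption (the log calls these through an `rpc_cmd` wrapper), and the target must already be running with `--wait-for-rpc`:

```shell
# Sketch only: assumes nvmf_tgt is already running with --wait-for-rpc
# and that rpc.py (scripts/rpc.py) talks to its default /var/tmp/spdk.sock.
rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately undersized pool
rpc.py framework_start_init
rpc.py bdev_malloc_create -b Malloc0 32 512
rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Drive enough I/O that the tiny pool is forced to retry allocations...
spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# ...then read the retry counter; the test passes when it is non-zero
rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
```

In the run logged here that final query returned 1958 retries, which is why the `[[ 1958 -eq 0 ]]` guard fell through and the test reported PASS. This is an operational fragment against a live SPDK target, not a standalone script.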
17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:02.822 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.822 17:14:19 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:02.822 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:02.822 Found net devices under 0000:86:00.0: cvl_0_0 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:02.822 Found net devices under 0000:86:00.1: cvl_0_1 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:02.822 17:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:02.823 17:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.823 17:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.823 ************************************ 00:20:02.823 START TEST nvmf_perf_adq 00:20:02.823 ************************************ 00:20:02.823 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:02.823 * Looking for test storage... 00:20:02.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:02.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.823 --rc genhtml_branch_coverage=1 00:20:02.823 --rc genhtml_function_coverage=1 00:20:02.823 --rc genhtml_legend=1 00:20:02.823 --rc geninfo_all_blocks=1 00:20:02.823 --rc geninfo_unexecuted_blocks=1 00:20:02.823 00:20:02.823 ' 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:02.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.823 --rc genhtml_branch_coverage=1 00:20:02.823 --rc genhtml_function_coverage=1 00:20:02.823 --rc genhtml_legend=1 00:20:02.823 --rc geninfo_all_blocks=1 00:20:02.823 --rc geninfo_unexecuted_blocks=1 00:20:02.823 00:20:02.823 ' 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:02.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.823 --rc genhtml_branch_coverage=1 00:20:02.823 --rc genhtml_function_coverage=1 00:20:02.823 --rc genhtml_legend=1 00:20:02.823 --rc geninfo_all_blocks=1 00:20:02.823 --rc geninfo_unexecuted_blocks=1 00:20:02.823 00:20:02.823 ' 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:02.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.823 --rc genhtml_branch_coverage=1 00:20:02.823 --rc genhtml_function_coverage=1 00:20:02.823 --rc genhtml_legend=1 00:20:02.823 --rc geninfo_all_blocks=1 00:20:02.823 --rc geninfo_unexecuted_blocks=1 00:20:02.823 00:20:02.823 ' 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:02.823 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.824 17:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:02.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:02.824 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.101 17:14:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:08.101 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:08.101 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:08.101 Found net devices under 0000:86:00.0: cvl_0_0 00:20:08.101 17:14:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.101 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:08.102 Found net devices under 0000:86:00.1: cvl_0_1 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:20:08.102 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:09.038 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:11.574 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:16.849 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:16.849 17:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:16.849 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:16.849 Found net devices under 0000:86:00.0: cvl_0_0 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.849 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:16.850 Found net devices under 0000:86:00.1: cvl_0_1 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:16.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:16.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:20:16.850 00:20:16.850 --- 10.0.0.2 ping statistics --- 00:20:16.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.850 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:20:16.850 00:20:16.850 --- 10.0.0.1 ping statistics --- 00:20:16.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.850 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2540990 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2540990 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2540990 ']' 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.850 [2024-11-20 17:14:34.419714] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:20:16.850 [2024-11-20 17:14:34.419767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.850 [2024-11-20 17:14:34.497640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.850 [2024-11-20 17:14:34.541581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.850 [2024-11-20 17:14:34.541616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.850 [2024-11-20 17:14:34.541623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.850 [2024-11-20 17:14:34.541630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.850 [2024-11-20 17:14:34.541635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.850 [2024-11-20 17:14:34.543246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.850 [2024-11-20 17:14:34.543338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.850 [2024-11-20 17:14:34.543446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.850 [2024-11-20 17:14:34.543447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.850 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:16.851 17:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 [2024-11-20 17:14:34.758061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 Malloc1 00:20:16.851 17:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.851 [2024-11-20 17:14:34.823797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2541230 00:20:16.851 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:16.851 17:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:19.381 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:19.381 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.381 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.381 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.381 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:19.381 "tick_rate": 2100000000, 00:20:19.381 "poll_groups": [ 00:20:19.381 { 00:20:19.381 "name": "nvmf_tgt_poll_group_000", 00:20:19.381 "admin_qpairs": 1, 00:20:19.381 "io_qpairs": 1, 00:20:19.381 "current_admin_qpairs": 1, 00:20:19.381 "current_io_qpairs": 1, 00:20:19.381 "pending_bdev_io": 0, 00:20:19.381 "completed_nvme_io": 19541, 00:20:19.381 "transports": [ 00:20:19.381 { 00:20:19.381 "trtype": "TCP" 00:20:19.381 } 00:20:19.381 ] 00:20:19.381 }, 00:20:19.381 { 00:20:19.381 "name": "nvmf_tgt_poll_group_001", 00:20:19.381 "admin_qpairs": 0, 00:20:19.381 "io_qpairs": 1, 00:20:19.381 "current_admin_qpairs": 0, 00:20:19.381 "current_io_qpairs": 1, 00:20:19.381 "pending_bdev_io": 0, 00:20:19.381 "completed_nvme_io": 20134, 00:20:19.381 "transports": [ 00:20:19.381 { 00:20:19.381 "trtype": "TCP" 00:20:19.381 } 00:20:19.381 ] 00:20:19.381 }, 00:20:19.381 { 00:20:19.381 "name": "nvmf_tgt_poll_group_002", 00:20:19.381 "admin_qpairs": 0, 00:20:19.381 "io_qpairs": 1, 00:20:19.381 "current_admin_qpairs": 0, 00:20:19.381 "current_io_qpairs": 1, 00:20:19.381 "pending_bdev_io": 0, 00:20:19.381 "completed_nvme_io": 19453, 00:20:19.381 
"transports": [ 00:20:19.381 { 00:20:19.381 "trtype": "TCP" 00:20:19.381 } 00:20:19.381 ] 00:20:19.381 }, 00:20:19.381 { 00:20:19.381 "name": "nvmf_tgt_poll_group_003", 00:20:19.381 "admin_qpairs": 0, 00:20:19.381 "io_qpairs": 1, 00:20:19.381 "current_admin_qpairs": 0, 00:20:19.381 "current_io_qpairs": 1, 00:20:19.381 "pending_bdev_io": 0, 00:20:19.381 "completed_nvme_io": 19410, 00:20:19.381 "transports": [ 00:20:19.381 { 00:20:19.381 "trtype": "TCP" 00:20:19.381 } 00:20:19.381 ] 00:20:19.381 } 00:20:19.381 ] 00:20:19.381 }' 00:20:19.381 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:19.382 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:19.382 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:19.382 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:19.382 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2541230 00:20:27.493 Initializing NVMe Controllers 00:20:27.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:27.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:27.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:27.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:27.493 Initialization complete. Launching workers. 
00:20:27.493 ======================================================== 00:20:27.493 Latency(us) 00:20:27.493 Device Information : IOPS MiB/s Average min max 00:20:27.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10373.70 40.52 6170.94 2466.47 10351.21 00:20:27.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10687.50 41.75 5989.44 2079.98 10149.99 00:20:27.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10440.10 40.78 6130.23 2380.35 10483.71 00:20:27.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10490.10 40.98 6101.66 2011.02 11040.96 00:20:27.494 ======================================================== 00:20:27.494 Total : 41991.38 164.03 6097.32 2011.02 11040.96 00:20:27.494 00:20:27.494 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:27.494 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.494 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.494 rmmod nvme_tcp 00:20:27.494 rmmod nvme_fabrics 00:20:27.494 rmmod nvme_keyring 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:27.494 17:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2540990 ']' 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2540990 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2540990 ']' 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2540990 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2540990 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2540990' 00:20:27.494 killing process with pid 2540990 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2540990 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2540990 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:27.494 
17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.494 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.399 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:29.399 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:29.399 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:29.399 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:30.777 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:32.680 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.958 17:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.958 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.958 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:37.958 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.958 17:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:37.958 Found net devices under 0000:86:00.1: cvl_0_1 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.958 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:37.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:20:37.959 00:20:37.959 --- 10.0.0.2 ping statistics --- 00:20:37.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.959 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:20:37.959 00:20:37.959 --- 10.0.0.1 ping statistics --- 00:20:37.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.959 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:37.959 net.core.busy_poll = 1 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:37.959 net.core.busy_read = 1 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:37.959 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2544980 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2544980 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2544980 ']' 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.218 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.218 [2024-11-20 17:14:56.138470] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:20:38.218 [2024-11-20 17:14:56.138519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.218 [2024-11-20 17:14:56.216257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.218 [2024-11-20 17:14:56.257327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.218 [2024-11-20 17:14:56.257364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.218 [2024-11-20 17:14:56.257372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.218 [2024-11-20 17:14:56.257379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:38.218 [2024-11-20 17:14:56.257384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.477 [2024-11-20 17:14:56.259025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.477 [2024-11-20 17:14:56.259134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.477 [2024-11-20 17:14:56.259242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.477 [2024-11-20 17:14:56.259242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.477 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.478 [2024-11-20 17:14:56.468655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.478 17:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.478 Malloc1 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.478 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.736 [2024-11-20 17:14:56.531981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2545051 
00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:38.736 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:40.643 "tick_rate": 2100000000, 00:20:40.643 "poll_groups": [ 00:20:40.643 { 00:20:40.643 "name": "nvmf_tgt_poll_group_000", 00:20:40.643 "admin_qpairs": 1, 00:20:40.643 "io_qpairs": 1, 00:20:40.643 "current_admin_qpairs": 1, 00:20:40.643 "current_io_qpairs": 1, 00:20:40.643 "pending_bdev_io": 0, 00:20:40.643 "completed_nvme_io": 23709, 00:20:40.643 "transports": [ 00:20:40.643 { 00:20:40.643 "trtype": "TCP" 00:20:40.643 } 00:20:40.643 ] 00:20:40.643 }, 00:20:40.643 { 00:20:40.643 "name": "nvmf_tgt_poll_group_001", 00:20:40.643 "admin_qpairs": 0, 00:20:40.643 "io_qpairs": 3, 00:20:40.643 "current_admin_qpairs": 0, 00:20:40.643 "current_io_qpairs": 3, 00:20:40.643 "pending_bdev_io": 0, 00:20:40.643 "completed_nvme_io": 31762, 00:20:40.643 "transports": [ 00:20:40.643 { 00:20:40.643 "trtype": "TCP" 00:20:40.643 } 00:20:40.643 ] 00:20:40.643 }, 00:20:40.643 { 00:20:40.643 "name": "nvmf_tgt_poll_group_002", 00:20:40.643 "admin_qpairs": 0, 00:20:40.643 "io_qpairs": 0, 00:20:40.643 "current_admin_qpairs": 0, 
00:20:40.643 "current_io_qpairs": 0, 00:20:40.643 "pending_bdev_io": 0, 00:20:40.643 "completed_nvme_io": 0, 00:20:40.643 "transports": [ 00:20:40.643 { 00:20:40.643 "trtype": "TCP" 00:20:40.643 } 00:20:40.643 ] 00:20:40.643 }, 00:20:40.643 { 00:20:40.643 "name": "nvmf_tgt_poll_group_003", 00:20:40.643 "admin_qpairs": 0, 00:20:40.643 "io_qpairs": 0, 00:20:40.643 "current_admin_qpairs": 0, 00:20:40.643 "current_io_qpairs": 0, 00:20:40.643 "pending_bdev_io": 0, 00:20:40.643 "completed_nvme_io": 0, 00:20:40.643 "transports": [ 00:20:40.643 { 00:20:40.643 "trtype": "TCP" 00:20:40.643 } 00:20:40.643 ] 00:20:40.643 } 00:20:40.643 ] 00:20:40.643 }' 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:40.643 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2545051 00:20:48.764 Initializing NVMe Controllers 00:20:48.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:48.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:48.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:48.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:48.764 Initialization complete. Launching workers. 
00:20:48.764 ======================================================== 00:20:48.764 Latency(us) 00:20:48.764 Device Information : IOPS MiB/s Average min max 00:20:48.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14761.60 57.66 4335.41 1473.36 46993.99 00:20:48.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5491.63 21.45 11656.58 1602.82 59387.74 00:20:48.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5370.83 20.98 11954.50 1558.99 58207.56 00:20:48.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5094.23 19.90 12617.57 1556.85 58720.74 00:20:48.764 ======================================================== 00:20:48.764 Total : 30718.29 119.99 8349.87 1473.36 59387.74 00:20:48.764 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.764 rmmod nvme_tcp 00:20:48.764 rmmod nvme_fabrics 00:20:48.764 rmmod nvme_keyring 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:48.764 17:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2544980 ']' 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2544980 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2544980 ']' 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2544980 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:48.764 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.023 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544980 00:20:49.023 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.023 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.023 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544980' 00:20:49.023 killing process with pid 2544980 00:20:49.023 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2544980 00:20:49.023 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2544980 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:49.023 
17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.023 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:52.313 00:20:52.313 real 0m50.176s 00:20:52.313 user 2m44.070s 00:20:52.313 sys 0m10.349s 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:52.313 ************************************ 00:20:52.313 END TEST nvmf_perf_adq 00:20:52.313 ************************************ 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:52.313 ************************************ 00:20:52.313 START TEST nvmf_shutdown 00:20:52.313 ************************************ 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:52.313 * Looking for test storage... 00:20:52.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.313 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.574 17:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:52.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.574 --rc genhtml_branch_coverage=1 00:20:52.574 --rc genhtml_function_coverage=1 00:20:52.574 --rc genhtml_legend=1 00:20:52.574 --rc geninfo_all_blocks=1 00:20:52.574 --rc geninfo_unexecuted_blocks=1 00:20:52.574 00:20:52.574 ' 00:20:52.574 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:52.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.574 --rc genhtml_branch_coverage=1 00:20:52.574 --rc genhtml_function_coverage=1 00:20:52.574 --rc genhtml_legend=1 00:20:52.574 --rc geninfo_all_blocks=1 00:20:52.574 --rc geninfo_unexecuted_blocks=1 00:20:52.574 00:20:52.574 ' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:52.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.575 --rc genhtml_branch_coverage=1 00:20:52.575 --rc genhtml_function_coverage=1 00:20:52.575 --rc genhtml_legend=1 00:20:52.575 --rc geninfo_all_blocks=1 00:20:52.575 --rc geninfo_unexecuted_blocks=1 00:20:52.575 00:20:52.575 ' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:52.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.575 --rc genhtml_branch_coverage=1 00:20:52.575 --rc genhtml_function_coverage=1 00:20:52.575 --rc genhtml_legend=1 00:20:52.575 --rc geninfo_all_blocks=1 00:20:52.575 --rc geninfo_unexecuted_blocks=1 00:20:52.575 00:20:52.575 ' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:52.575 ************************************ 00:20:52.575 START TEST nvmf_shutdown_tc1 00:20:52.575 ************************************ 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.575 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:59.262 17:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.262 17:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:59.262 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.262 17:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:59.262 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:59.262 Found net devices under 0000:86:00.0: cvl_0_0 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:59.262 Found net devices under 0000:86:00.1: cvl_0_1 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:59.262 17:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.262 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:59.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:20:59.263 00:20:59.263 --- 10.0.0.2 ping statistics --- 00:20:59.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.263 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:59.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:20:59.263 00:20:59.263 --- 10.0.0.1 ping statistics --- 00:20:59.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.263 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2551013 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2551013 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2551013 ']' 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:59.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 [2024-11-20 17:15:16.505626] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:20:59.263 [2024-11-20 17:15:16.505675] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.263 [2024-11-20 17:15:16.583301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.263 [2024-11-20 17:15:16.623716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.263 [2024-11-20 17:15:16.623755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.263 [2024-11-20 17:15:16.623762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.263 [2024-11-20 17:15:16.623767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.263 [2024-11-20 17:15:16.623772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.263 [2024-11-20 17:15:16.625389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.263 [2024-11-20 17:15:16.625499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.263 [2024-11-20 17:15:16.625583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.263 [2024-11-20 17:15:16.625584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 [2024-11-20 17:15:16.775248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.263 17:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.263 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.263 Malloc1 00:20:59.263 [2024-11-20 17:15:16.893036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.263 Malloc2 00:20:59.263 Malloc3 00:20:59.263 Malloc4 00:20:59.263 Malloc5 00:20:59.263 Malloc6 00:20:59.263 Malloc7 00:20:59.263 Malloc8 00:20:59.263 Malloc9 
00:20:59.263 Malloc10 00:20:59.263 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.264 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:59.264 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.264 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2551139 00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2551139 /var/tmp/bdevperf.sock 00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2551139 ']' 00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:59.523 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:59.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.524 { 00:20:59.524 "params": { 00:20:59.524 "name": "Nvme$subsystem", 00:20:59.524 "trtype": "$TEST_TRANSPORT", 00:20:59.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.524 "adrfam": "ipv4", 00:20:59.524 "trsvcid": "$NVMF_PORT", 00:20:59.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.524 "hdgst": ${hdgst:-false}, 00:20:59.524 "ddgst": ${ddgst:-false} 00:20:59.524 }, 00:20:59.524 "method": "bdev_nvme_attach_controller" 00:20:59.524 } 00:20:59.524 EOF 00:20:59.524 )") 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.524 { 00:20:59.524 "params": { 00:20:59.524 "name": "Nvme$subsystem", 00:20:59.524 "trtype": "$TEST_TRANSPORT", 00:20:59.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.524 
"adrfam": "ipv4", 00:20:59.524 "trsvcid": "$NVMF_PORT", 00:20:59.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.524 "hdgst": ${hdgst:-false}, 00:20:59.524 "ddgst": ${ddgst:-false} 00:20:59.524 }, 00:20:59.524 "method": "bdev_nvme_attach_controller" 00:20:59.524 } 00:20:59.524 EOF 00:20:59.524 )") 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.524 { 00:20:59.524 "params": { 00:20:59.524 "name": "Nvme$subsystem", 00:20:59.524 "trtype": "$TEST_TRANSPORT", 00:20:59.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.524 "adrfam": "ipv4", 00:20:59.524 "trsvcid": "$NVMF_PORT", 00:20:59.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.524 "hdgst": ${hdgst:-false}, 00:20:59.524 "ddgst": ${ddgst:-false} 00:20:59.524 }, 00:20:59.524 "method": "bdev_nvme_attach_controller" 00:20:59.524 } 00:20:59.524 EOF 00:20:59.524 )") 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.524 { 00:20:59.524 "params": { 00:20:59.524 "name": "Nvme$subsystem", 00:20:59.524 "trtype": "$TEST_TRANSPORT", 00:20:59.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.524 "adrfam": "ipv4", 00:20:59.524 "trsvcid": "$NVMF_PORT", 00:20:59.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:59.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.524 "hdgst": ${hdgst:-false}, 00:20:59.524 "ddgst": ${ddgst:-false} 00:20:59.524 }, 00:20:59.524 "method": "bdev_nvme_attach_controller" 00:20:59.524 } 00:20:59.524 EOF 00:20:59.524 )") 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.524 { 00:20:59.524 "params": { 00:20:59.524 "name": "Nvme$subsystem", 00:20:59.524 "trtype": "$TEST_TRANSPORT", 00:20:59.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.524 "adrfam": "ipv4", 00:20:59.524 "trsvcid": "$NVMF_PORT", 00:20:59.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.524 "hdgst": ${hdgst:-false}, 00:20:59.524 "ddgst": ${ddgst:-false} 00:20:59.524 }, 00:20:59.524 "method": "bdev_nvme_attach_controller" 00:20:59.524 } 00:20:59.524 EOF 00:20:59.524 )") 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.524 { 00:20:59.524 "params": { 00:20:59.524 "name": "Nvme$subsystem", 00:20:59.524 "trtype": "$TEST_TRANSPORT", 00:20:59.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.524 "adrfam": "ipv4", 00:20:59.524 "trsvcid": "$NVMF_PORT", 00:20:59.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.524 "hdgst": ${hdgst:-false}, 00:20:59.524 "ddgst": 
${ddgst:-false} 00:20:59.524 }, 00:20:59.524 "method": "bdev_nvme_attach_controller" 00:20:59.524 } 00:20:59.524 EOF 00:20:59.524 )") 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.524 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.525 { 00:20:59.525 "params": { 00:20:59.525 "name": "Nvme$subsystem", 00:20:59.525 "trtype": "$TEST_TRANSPORT", 00:20:59.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.525 "adrfam": "ipv4", 00:20:59.525 "trsvcid": "$NVMF_PORT", 00:20:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.525 "hdgst": ${hdgst:-false}, 00:20:59.525 "ddgst": ${ddgst:-false} 00:20:59.525 }, 00:20:59.525 "method": "bdev_nvme_attach_controller" 00:20:59.525 } 00:20:59.525 EOF 00:20:59.525 )") 00:20:59.525 [2024-11-20 17:15:17.364302] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:20:59.525 [2024-11-20 17:15:17.364351] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.525 { 00:20:59.525 "params": { 00:20:59.525 "name": "Nvme$subsystem", 00:20:59.525 "trtype": "$TEST_TRANSPORT", 00:20:59.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.525 "adrfam": "ipv4", 00:20:59.525 "trsvcid": "$NVMF_PORT", 00:20:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.525 "hdgst": ${hdgst:-false}, 00:20:59.525 "ddgst": ${ddgst:-false} 00:20:59.525 }, 00:20:59.525 "method": "bdev_nvme_attach_controller" 00:20:59.525 } 00:20:59.525 EOF 00:20:59.525 )") 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.525 { 00:20:59.525 "params": { 00:20:59.525 "name": "Nvme$subsystem", 00:20:59.525 "trtype": "$TEST_TRANSPORT", 00:20:59.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.525 "adrfam": "ipv4", 00:20:59.525 "trsvcid": "$NVMF_PORT", 00:20:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.525 "hdgst": ${hdgst:-false}, 
00:20:59.525 "ddgst": ${ddgst:-false} 00:20:59.525 }, 00:20:59.525 "method": "bdev_nvme_attach_controller" 00:20:59.525 } 00:20:59.525 EOF 00:20:59.525 )") 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:59.525 { 00:20:59.525 "params": { 00:20:59.525 "name": "Nvme$subsystem", 00:20:59.525 "trtype": "$TEST_TRANSPORT", 00:20:59.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.525 "adrfam": "ipv4", 00:20:59.525 "trsvcid": "$NVMF_PORT", 00:20:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.525 "hdgst": ${hdgst:-false}, 00:20:59.525 "ddgst": ${ddgst:-false} 00:20:59.525 }, 00:20:59.525 "method": "bdev_nvme_attach_controller" 00:20:59.525 } 00:20:59.525 EOF 00:20:59.525 )") 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:59.525 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:59.525 "params": { 00:20:59.525 "name": "Nvme1", 00:20:59.525 "trtype": "tcp", 00:20:59.525 "traddr": "10.0.0.2", 00:20:59.525 "adrfam": "ipv4", 00:20:59.525 "trsvcid": "4420", 00:20:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.525 "hdgst": false, 00:20:59.525 "ddgst": false 00:20:59.525 }, 00:20:59.525 "method": "bdev_nvme_attach_controller" 00:20:59.525 },{ 00:20:59.525 "params": { 00:20:59.525 "name": "Nvme2", 00:20:59.525 "trtype": "tcp", 00:20:59.525 "traddr": "10.0.0.2", 00:20:59.525 "adrfam": "ipv4", 00:20:59.525 "trsvcid": "4420", 00:20:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.525 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:59.525 "hdgst": false, 00:20:59.525 "ddgst": false 00:20:59.525 }, 00:20:59.525 "method": "bdev_nvme_attach_controller" 00:20:59.525 },{ 00:20:59.525 "params": { 00:20:59.525 "name": "Nvme3", 00:20:59.525 "trtype": "tcp", 00:20:59.525 "traddr": "10.0.0.2", 00:20:59.525 "adrfam": "ipv4", 00:20:59.525 "trsvcid": "4420", 00:20:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:59.525 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:59.525 "hdgst": false, 00:20:59.525 "ddgst": false 00:20:59.525 }, 00:20:59.525 "method": "bdev_nvme_attach_controller" 00:20:59.525 },{ 00:20:59.525 "params": { 00:20:59.525 "name": "Nvme4", 00:20:59.525 "trtype": "tcp", 00:20:59.525 "traddr": "10.0.0.2", 00:20:59.525 "adrfam": "ipv4", 00:20:59.525 "trsvcid": "4420", 00:20:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:59.525 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:59.525 "hdgst": false, 00:20:59.526 "ddgst": false 00:20:59.526 }, 00:20:59.526 "method": "bdev_nvme_attach_controller" 00:20:59.526 },{ 00:20:59.526 "params": { 
00:20:59.526 "name": "Nvme5", 00:20:59.526 "trtype": "tcp", 00:20:59.526 "traddr": "10.0.0.2", 00:20:59.526 "adrfam": "ipv4", 00:20:59.526 "trsvcid": "4420", 00:20:59.526 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:59.526 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:59.526 "hdgst": false, 00:20:59.526 "ddgst": false 00:20:59.526 }, 00:20:59.526 "method": "bdev_nvme_attach_controller" 00:20:59.526 },{ 00:20:59.526 "params": { 00:20:59.526 "name": "Nvme6", 00:20:59.526 "trtype": "tcp", 00:20:59.526 "traddr": "10.0.0.2", 00:20:59.526 "adrfam": "ipv4", 00:20:59.526 "trsvcid": "4420", 00:20:59.526 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:59.526 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:59.526 "hdgst": false, 00:20:59.526 "ddgst": false 00:20:59.526 }, 00:20:59.526 "method": "bdev_nvme_attach_controller" 00:20:59.526 },{ 00:20:59.526 "params": { 00:20:59.526 "name": "Nvme7", 00:20:59.526 "trtype": "tcp", 00:20:59.526 "traddr": "10.0.0.2", 00:20:59.526 "adrfam": "ipv4", 00:20:59.526 "trsvcid": "4420", 00:20:59.526 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:59.526 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:59.526 "hdgst": false, 00:20:59.526 "ddgst": false 00:20:59.526 }, 00:20:59.526 "method": "bdev_nvme_attach_controller" 00:20:59.526 },{ 00:20:59.526 "params": { 00:20:59.526 "name": "Nvme8", 00:20:59.526 "trtype": "tcp", 00:20:59.526 "traddr": "10.0.0.2", 00:20:59.526 "adrfam": "ipv4", 00:20:59.526 "trsvcid": "4420", 00:20:59.526 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:59.526 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:59.526 "hdgst": false, 00:20:59.526 "ddgst": false 00:20:59.526 }, 00:20:59.526 "method": "bdev_nvme_attach_controller" 00:20:59.526 },{ 00:20:59.526 "params": { 00:20:59.526 "name": "Nvme9", 00:20:59.526 "trtype": "tcp", 00:20:59.526 "traddr": "10.0.0.2", 00:20:59.526 "adrfam": "ipv4", 00:20:59.526 "trsvcid": "4420", 00:20:59.526 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:59.526 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:59.526 "hdgst": false, 00:20:59.526 "ddgst": false 00:20:59.526 }, 00:20:59.526 "method": "bdev_nvme_attach_controller" 00:20:59.526 },{ 00:20:59.526 "params": { 00:20:59.526 "name": "Nvme10", 00:20:59.526 "trtype": "tcp", 00:20:59.526 "traddr": "10.0.0.2", 00:20:59.526 "adrfam": "ipv4", 00:20:59.526 "trsvcid": "4420", 00:20:59.526 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:59.526 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:59.526 "hdgst": false, 00:20:59.526 "ddgst": false 00:20:59.526 }, 00:20:59.526 "method": "bdev_nvme_attach_controller" 00:20:59.526 }' 00:20:59.526 [2024-11-20 17:15:17.442086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.526 [2024-11-20 17:15:17.483150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2551139 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:01.428 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:02.364 
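The trace above shows `nvmf/common.sh` building one JSON fragment per subsystem with a heredoc inside command substitution, collecting the fragments in a bash array, and finally joining them with commas for the `--json` argument. A minimal sketch of that pattern (not the SPDK source itself; names and addresses are illustrative) looks like this:

```shell
# Sketch of the config-assembly pattern traced in the log: each subsystem
# contributes one JSON fragment via a heredoc, fragments accumulate in an
# array, and "${config[*]}" with IFS=, joins them comma-separated.
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas; the real script pipes the result
# through `jq .` to validate and pretty-print it before use.
IFS=,
printf '%s\n' "${config[*]}"
```

The `${hdgst:-false}` expansions are why each emitted fragment in the log shows `"hdgst": false` when the variables are unset.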
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2551139 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2551013 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.364 { 00:21:02.364 "params": { 00:21:02.364 "name": "Nvme$subsystem", 00:21:02.364 "trtype": "$TEST_TRANSPORT", 00:21:02.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.364 "adrfam": "ipv4", 00:21:02.364 "trsvcid": "$NVMF_PORT", 00:21:02.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.364 "hdgst": ${hdgst:-false}, 00:21:02.364 "ddgst": ${ddgst:-false} 00:21:02.364 }, 00:21:02.364 "method": "bdev_nvme_attach_controller" 00:21:02.364 } 00:21:02.364 EOF 00:21:02.364 )") 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.364 17:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.364 { 00:21:02.364 "params": { 00:21:02.364 "name": "Nvme$subsystem", 00:21:02.364 "trtype": "$TEST_TRANSPORT", 00:21:02.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.364 "adrfam": "ipv4", 00:21:02.364 "trsvcid": "$NVMF_PORT", 00:21:02.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.364 "hdgst": ${hdgst:-false}, 00:21:02.364 "ddgst": ${ddgst:-false} 00:21:02.364 }, 00:21:02.364 "method": "bdev_nvme_attach_controller" 00:21:02.364 } 00:21:02.364 EOF 00:21:02.364 )") 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.364 { 00:21:02.364 "params": { 00:21:02.364 "name": "Nvme$subsystem", 00:21:02.364 "trtype": "$TEST_TRANSPORT", 00:21:02.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.364 "adrfam": "ipv4", 00:21:02.364 "trsvcid": "$NVMF_PORT", 00:21:02.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.364 "hdgst": ${hdgst:-false}, 00:21:02.364 "ddgst": ${ddgst:-false} 00:21:02.364 }, 00:21:02.364 "method": "bdev_nvme_attach_controller" 00:21:02.364 } 00:21:02.364 EOF 00:21:02.364 )") 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.364 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.364 
17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.364 { 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme$subsystem", 00:21:02.365 "trtype": "$TEST_TRANSPORT", 00:21:02.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "$NVMF_PORT", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.365 "hdgst": ${hdgst:-false}, 00:21:02.365 "ddgst": ${ddgst:-false} 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 } 00:21:02.365 EOF 00:21:02.365 )") 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.365 { 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme$subsystem", 00:21:02.365 "trtype": "$TEST_TRANSPORT", 00:21:02.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "$NVMF_PORT", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.365 "hdgst": ${hdgst:-false}, 00:21:02.365 "ddgst": ${ddgst:-false} 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 } 00:21:02.365 EOF 00:21:02.365 )") 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:21:02.365 { 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme$subsystem", 00:21:02.365 "trtype": "$TEST_TRANSPORT", 00:21:02.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "$NVMF_PORT", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.365 "hdgst": ${hdgst:-false}, 00:21:02.365 "ddgst": ${ddgst:-false} 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 } 00:21:02.365 EOF 00:21:02.365 )") 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.365 { 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme$subsystem", 00:21:02.365 "trtype": "$TEST_TRANSPORT", 00:21:02.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "$NVMF_PORT", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.365 "hdgst": ${hdgst:-false}, 00:21:02.365 "ddgst": ${ddgst:-false} 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 } 00:21:02.365 EOF 00:21:02.365 )") 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.365 [2024-11-20 17:15:20.309756] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:21:02.365 [2024-11-20 17:15:20.309810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551684 ] 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.365 { 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme$subsystem", 00:21:02.365 "trtype": "$TEST_TRANSPORT", 00:21:02.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "$NVMF_PORT", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.365 "hdgst": ${hdgst:-false}, 00:21:02.365 "ddgst": ${ddgst:-false} 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 } 00:21:02.365 EOF 00:21:02.365 )") 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.365 { 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme$subsystem", 00:21:02.365 "trtype": "$TEST_TRANSPORT", 00:21:02.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "$NVMF_PORT", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.365 "hdgst": ${hdgst:-false}, 00:21:02.365 "ddgst": ${ddgst:-false} 00:21:02.365 }, 00:21:02.365 "method": 
"bdev_nvme_attach_controller" 00:21:02.365 } 00:21:02.365 EOF 00:21:02.365 )") 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:02.365 { 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme$subsystem", 00:21:02.365 "trtype": "$TEST_TRANSPORT", 00:21:02.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "$NVMF_PORT", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.365 "hdgst": ${hdgst:-false}, 00:21:02.365 "ddgst": ${ddgst:-false} 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 } 00:21:02.365 EOF 00:21:02.365 )") 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:02.365 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme1", 00:21:02.365 "trtype": "tcp", 00:21:02.365 "traddr": "10.0.0.2", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "4420", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.365 "hdgst": false, 00:21:02.365 "ddgst": false 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 },{ 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme2", 00:21:02.365 "trtype": "tcp", 00:21:02.365 "traddr": "10.0.0.2", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "4420", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:02.365 "hdgst": false, 00:21:02.365 "ddgst": false 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 },{ 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme3", 00:21:02.365 "trtype": "tcp", 00:21:02.365 "traddr": "10.0.0.2", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "4420", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:02.365 "hdgst": false, 00:21:02.365 "ddgst": false 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 },{ 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme4", 00:21:02.365 "trtype": "tcp", 00:21:02.365 "traddr": "10.0.0.2", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "4420", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:02.365 "hdgst": false, 00:21:02.365 "ddgst": false 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 },{ 00:21:02.365 "params": { 
00:21:02.365 "name": "Nvme5", 00:21:02.365 "trtype": "tcp", 00:21:02.365 "traddr": "10.0.0.2", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "4420", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:02.365 "hdgst": false, 00:21:02.365 "ddgst": false 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 },{ 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme6", 00:21:02.365 "trtype": "tcp", 00:21:02.365 "traddr": "10.0.0.2", 00:21:02.365 "adrfam": "ipv4", 00:21:02.365 "trsvcid": "4420", 00:21:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:02.365 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:02.365 "hdgst": false, 00:21:02.365 "ddgst": false 00:21:02.365 }, 00:21:02.365 "method": "bdev_nvme_attach_controller" 00:21:02.365 },{ 00:21:02.365 "params": { 00:21:02.365 "name": "Nvme7", 00:21:02.365 "trtype": "tcp", 00:21:02.366 "traddr": "10.0.0.2", 00:21:02.366 "adrfam": "ipv4", 00:21:02.366 "trsvcid": "4420", 00:21:02.366 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:02.366 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:02.366 "hdgst": false, 00:21:02.366 "ddgst": false 00:21:02.366 }, 00:21:02.366 "method": "bdev_nvme_attach_controller" 00:21:02.366 },{ 00:21:02.366 "params": { 00:21:02.366 "name": "Nvme8", 00:21:02.366 "trtype": "tcp", 00:21:02.366 "traddr": "10.0.0.2", 00:21:02.366 "adrfam": "ipv4", 00:21:02.366 "trsvcid": "4420", 00:21:02.366 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:02.366 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:02.366 "hdgst": false, 00:21:02.366 "ddgst": false 00:21:02.366 }, 00:21:02.366 "method": "bdev_nvme_attach_controller" 00:21:02.366 },{ 00:21:02.366 "params": { 00:21:02.366 "name": "Nvme9", 00:21:02.366 "trtype": "tcp", 00:21:02.366 "traddr": "10.0.0.2", 00:21:02.366 "adrfam": "ipv4", 00:21:02.366 "trsvcid": "4420", 00:21:02.366 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:02.366 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:02.366 "hdgst": false, 00:21:02.366 "ddgst": false 00:21:02.366 }, 00:21:02.366 "method": "bdev_nvme_attach_controller" 00:21:02.366 },{ 00:21:02.366 "params": { 00:21:02.366 "name": "Nvme10", 00:21:02.366 "trtype": "tcp", 00:21:02.366 "traddr": "10.0.0.2", 00:21:02.366 "adrfam": "ipv4", 00:21:02.366 "trsvcid": "4420", 00:21:02.366 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:02.366 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:02.366 "hdgst": false, 00:21:02.366 "ddgst": false 00:21:02.366 }, 00:21:02.366 "method": "bdev_nvme_attach_controller" 00:21:02.366 }' 00:21:02.366 [2024-11-20 17:15:20.392231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.625 [2024-11-20 17:15:20.433982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.000 Running I/O for 1 seconds... 00:21:04.938 2248.00 IOPS, 140.50 MiB/s 00:21:04.938 Latency(us) 00:21:04.938 [2024-11-20T16:15:22.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.938 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme1n1 : 1.14 279.56 17.47 0.00 0.00 226477.79 16602.45 209715.20 00:21:04.938 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme2n1 : 1.16 280.66 17.54 0.00 0.00 221658.12 9299.87 214708.42 00:21:04.938 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme3n1 : 1.13 282.39 17.65 0.00 0.00 216734.28 13793.77 211712.49 00:21:04.938 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme4n1 : 1.15 281.79 17.61 0.00 0.00 214859.00 6272.73 203723.34 00:21:04.938 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme5n1 : 1.16 274.96 17.18 0.00 0.00 217410.02 15978.30 228689.43 00:21:04.938 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme6n1 : 1.17 273.07 17.07 0.00 0.00 216690.10 17226.61 226692.14 00:21:04.938 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme7n1 : 1.16 276.58 17.29 0.00 0.00 210609.49 25590.25 219701.64 00:21:04.938 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme8n1 : 1.17 274.14 17.13 0.00 0.00 209671.61 14230.67 217704.35 00:21:04.938 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme9n1 : 1.17 272.63 17.04 0.00 0.00 207891.89 16477.62 219701.64 00:21:04.938 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.938 Verification LBA range: start 0x0 length 0x400 00:21:04.938 Nvme10n1 : 1.18 272.01 17.00 0.00 0.00 205303.61 16727.28 233682.65 00:21:04.938 [2024-11-20T16:15:22.981Z] =================================================================================================================== 00:21:04.938 [2024-11-20T16:15:22.981Z] Total : 2767.78 172.99 0.00 0.00 214741.51 6272.73 233682.65 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.197 rmmod nvme_tcp 00:21:05.197 rmmod nvme_fabrics 00:21:05.197 rmmod nvme_keyring 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:05.197 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2551013 ']' 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2551013 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2551013 ']' 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2551013 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2551013 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2551013' 00:21:05.198 killing process with pid 2551013 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2551013 00:21:05.198 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2551013 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.766 17:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.766 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.674 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.674 00:21:07.674 real 0m15.168s 00:21:07.674 user 0m33.323s 00:21:07.674 sys 0m5.903s 00:21:07.674 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.674 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:07.674 ************************************ 00:21:07.675 END TEST nvmf_shutdown_tc1 00:21:07.675 ************************************ 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:07.675 ************************************ 00:21:07.675 
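The `killprocess` helper traced in the tc1 teardown above relies on `kill -0 PID`, which sends no signal at all: its exit status only reports whether the PID exists and is signalable. A small self-contained sketch of that liveness check (hypothetical standalone example, not the autotest helper):

```shell
# kill -0 sends no signal; it only tests whether the PID is alive/signalable.
sleep 5 &
pid=$!

# Process was just started, so the check succeeds.
if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
fi

# Force-kill and reap it, as the shutdown test does with kill -9.
kill -9 "$pid"
wait "$pid" 2>/dev/null || true

# After reaping, kill -0 fails: the PID no longer exists.
if ! kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is gone"
fi
```

This is why the script can gate the teardown on `kill -0 2551013` before calling `killprocess`: a zero exit status means the nvmf target is still running and needs to be killed.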
START TEST nvmf_shutdown_tc2 00:21:07.675 ************************************ 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.675 17:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.675 17:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.675 17:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:07.675 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:07.675 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:07.675 17:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.675 17:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:07.675 Found net devices under 0000:86:00.0: cvl_0_0 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:07.675 Found net devices under 0000:86:00.1: cvl_0_1 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:07.675 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.676 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:07.935 17:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:07.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:07.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:21:07.935 00:21:07.935 --- 10.0.0.2 ping statistics --- 00:21:07.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.935 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:21:07.935 00:21:07.935 --- 10.0.0.1 ping statistics --- 00:21:07.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.935 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:07.935 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:08.193 17:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2552722 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2552722 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2552722 ']' 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.193 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.193 [2024-11-20 17:15:26.065385] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:21:08.193 [2024-11-20 17:15:26.065434] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.193 [2024-11-20 17:15:26.145320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.193 [2024-11-20 17:15:26.188288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.193 [2024-11-20 17:15:26.188321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.193 [2024-11-20 17:15:26.188328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.193 [2024-11-20 17:15:26.188334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.193 [2024-11-20 17:15:26.188339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
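The nvmf_tcp_init sequence logged above (flush addresses, create the namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420 in iptables, then ping-verify both directions) can be sketched as a dry-run script. This is not SPDK's actual nvmf/common.sh; interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are copied from the log, and run() only prints each command so the sketch needs no root and changes nothing:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init bring-up seen in the log above.
# run() prints the command instead of executing it, so this is safe to
# run anywhere; drop the printf to execute for real (requires root).
set -euo pipefail

run() { printf '%s\n' "$*"; }

nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    local target_ip=10.0.0.2 initiator_ip=10.0.0.1

    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    # The target-side NIC moves into the namespace; the initiator NIC stays
    # in the default namespace, giving two independent network stacks.
    run ip link set "$target_if" netns "$ns"
    run ip addr add "$initiator_ip/24" dev "$initiator_if"
    run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Accept NVMe/TCP traffic on the default discovery/IO port.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Reachability check, as the log does before starting nvmf_tgt.
    run ping -c 1 "$target_ip"
}

nvmf_tcp_init_sketch
```

Because the target runs inside the namespace, every later target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, while the initiator-side bdevperf runs unprefixed.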
00:21:08.193 [2024-11-20 17:15:26.189733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.193 [2024-11-20 17:15:26.189826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.193 [2024-11-20 17:15:26.189934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.193 [2024-11-20 17:15:26.189935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.125 [2024-11-20 17:15:26.942039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.125 17:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.125 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.125 Malloc1 00:21:09.125 [2024-11-20 17:15:27.045735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.125 Malloc2 00:21:09.125 Malloc3 00:21:09.125 Malloc4 00:21:09.384 Malloc5 00:21:09.384 Malloc6 00:21:09.384 Malloc7 00:21:09.384 Malloc8 00:21:09.384 Malloc9 
00:21:09.384 Malloc10 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2553029 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2553029 /var/tmp/bdevperf.sock 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2553029 ']' 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:09.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.643 { 00:21:09.643 "params": { 00:21:09.643 "name": "Nvme$subsystem", 00:21:09.643 "trtype": "$TEST_TRANSPORT", 00:21:09.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.643 "adrfam": "ipv4", 00:21:09.643 "trsvcid": "$NVMF_PORT", 00:21:09.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.643 "hdgst": ${hdgst:-false}, 00:21:09.643 "ddgst": ${ddgst:-false} 00:21:09.643 }, 00:21:09.643 "method": "bdev_nvme_attach_controller" 00:21:09.643 } 00:21:09.643 EOF 00:21:09.643 )") 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.643 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.643 { 00:21:09.643 "params": { 00:21:09.643 "name": "Nvme$subsystem", 00:21:09.643 "trtype": "$TEST_TRANSPORT", 00:21:09.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.643 "adrfam": "ipv4", 00:21:09.644 "trsvcid": "$NVMF_PORT", 00:21:09.644 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.644 "hdgst": ${hdgst:-false}, 00:21:09.644 "ddgst": ${ddgst:-false} 00:21:09.644 }, 00:21:09.644 "method": "bdev_nvme_attach_controller" 00:21:09.644 } 00:21:09.644 EOF 00:21:09.644 )") 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.644 { 00:21:09.644 "params": { 00:21:09.644 "name": "Nvme$subsystem", 00:21:09.644 "trtype": "$TEST_TRANSPORT", 00:21:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.644 "adrfam": "ipv4", 00:21:09.644 "trsvcid": "$NVMF_PORT", 00:21:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.644 "hdgst": ${hdgst:-false}, 00:21:09.644 "ddgst": ${ddgst:-false} 00:21:09.644 }, 00:21:09.644 "method": "bdev_nvme_attach_controller" 00:21:09.644 } 00:21:09.644 EOF 00:21:09.644 )") 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.644 { 00:21:09.644 "params": { 00:21:09.644 "name": "Nvme$subsystem", 00:21:09.644 "trtype": "$TEST_TRANSPORT", 00:21:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.644 "adrfam": "ipv4", 00:21:09.644 "trsvcid": "$NVMF_PORT", 00:21:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.644 "hdgst": 
${hdgst:-false}, 00:21:09.644 "ddgst": ${ddgst:-false} 00:21:09.644 }, 00:21:09.644 "method": "bdev_nvme_attach_controller" 00:21:09.644 } 00:21:09.644 EOF 00:21:09.644 )") 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.644 { 00:21:09.644 "params": { 00:21:09.644 "name": "Nvme$subsystem", 00:21:09.644 "trtype": "$TEST_TRANSPORT", 00:21:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.644 "adrfam": "ipv4", 00:21:09.644 "trsvcid": "$NVMF_PORT", 00:21:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.644 "hdgst": ${hdgst:-false}, 00:21:09.644 "ddgst": ${ddgst:-false} 00:21:09.644 }, 00:21:09.644 "method": "bdev_nvme_attach_controller" 00:21:09.644 } 00:21:09.644 EOF 00:21:09.644 )") 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.644 { 00:21:09.644 "params": { 00:21:09.644 "name": "Nvme$subsystem", 00:21:09.644 "trtype": "$TEST_TRANSPORT", 00:21:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.644 "adrfam": "ipv4", 00:21:09.644 "trsvcid": "$NVMF_PORT", 00:21:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.644 "hdgst": ${hdgst:-false}, 00:21:09.644 "ddgst": ${ddgst:-false} 00:21:09.644 }, 00:21:09.644 "method": "bdev_nvme_attach_controller" 
00:21:09.644 } 00:21:09.644 EOF 00:21:09.644 )") 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.644 { 00:21:09.644 "params": { 00:21:09.644 "name": "Nvme$subsystem", 00:21:09.644 "trtype": "$TEST_TRANSPORT", 00:21:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.644 "adrfam": "ipv4", 00:21:09.644 "trsvcid": "$NVMF_PORT", 00:21:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.644 "hdgst": ${hdgst:-false}, 00:21:09.644 "ddgst": ${ddgst:-false} 00:21:09.644 }, 00:21:09.644 "method": "bdev_nvme_attach_controller" 00:21:09.644 } 00:21:09.644 EOF 00:21:09.644 )") 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.644 [2024-11-20 17:15:27.520113] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:21:09.644 [2024-11-20 17:15:27.520165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553029 ] 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.644 { 00:21:09.644 "params": { 00:21:09.644 "name": "Nvme$subsystem", 00:21:09.644 "trtype": "$TEST_TRANSPORT", 00:21:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.644 "adrfam": "ipv4", 00:21:09.644 "trsvcid": "$NVMF_PORT", 00:21:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.644 "hdgst": ${hdgst:-false}, 00:21:09.644 "ddgst": ${ddgst:-false} 00:21:09.644 }, 00:21:09.644 "method": "bdev_nvme_attach_controller" 00:21:09.644 } 00:21:09.644 EOF 00:21:09.644 )") 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.644 { 00:21:09.644 "params": { 00:21:09.644 "name": "Nvme$subsystem", 00:21:09.644 "trtype": "$TEST_TRANSPORT", 00:21:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.644 "adrfam": "ipv4", 00:21:09.644 "trsvcid": "$NVMF_PORT", 00:21:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.644 "hdgst": ${hdgst:-false}, 00:21:09.644 "ddgst": ${ddgst:-false} 00:21:09.644 }, 00:21:09.644 "method": 
"bdev_nvme_attach_controller" 00:21:09.644 } 00:21:09.644 EOF 00:21:09.644 )") 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:09.644 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:09.644 { 00:21:09.644 "params": { 00:21:09.644 "name": "Nvme$subsystem", 00:21:09.644 "trtype": "$TEST_TRANSPORT", 00:21:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.644 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "$NVMF_PORT", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.645 "hdgst": ${hdgst:-false}, 00:21:09.645 "ddgst": ${ddgst:-false} 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 } 00:21:09.645 EOF 00:21:09.645 )") 00:21:09.645 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:09.645 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:09.645 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:09.645 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme1", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme2", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme3", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme4", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 
00:21:09.645 "name": "Nvme5", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme6", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme7", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme8", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme9", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 },{ 00:21:09.645 "params": { 00:21:09.645 "name": "Nvme10", 00:21:09.645 "trtype": "tcp", 00:21:09.645 "traddr": "10.0.0.2", 00:21:09.645 "adrfam": "ipv4", 00:21:09.645 "trsvcid": "4420", 00:21:09.645 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:09.645 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:09.645 "hdgst": false, 00:21:09.645 "ddgst": false 00:21:09.645 }, 00:21:09.645 "method": "bdev_nvme_attach_controller" 00:21:09.645 }' 00:21:09.645 [2024-11-20 17:15:27.606908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.645 [2024-11-20 17:15:27.648251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.548 Running I/O for 10 seconds... 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:11.548 17:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.548 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:11.549 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:11.549 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:11.808 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:11.808 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:11.808 17:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2553029 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2553029 ']' 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2553029 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.809 17:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2553029 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2553029' 00:21:11.809 killing process with pid 2553029 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2553029 00:21:11.809 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2553029 00:21:11.809 Received shutdown signal, test time was about 0.614240 seconds 00:21:11.809 00:21:11.809 Latency(us) 00:21:11.809 [2024-11-20T16:15:29.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.809 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme1n1 : 0.60 321.10 20.07 0.00 0.00 195891.20 26214.40 198730.12 00:21:11.809 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme2n1 : 0.61 316.20 19.76 0.00 0.00 193954.38 15978.30 196732.83 00:21:11.809 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme3n1 : 0.60 319.51 19.97 0.00 0.00 186860.17 26464.06 188743.68 00:21:11.809 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme4n1 : 0.61 317.03 19.81 0.00 0.00 183297.46 
15853.47 200727.41 00:21:11.809 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme5n1 : 0.61 314.47 19.65 0.00 0.00 179817.00 18474.91 215707.06 00:21:11.809 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme6n1 : 0.58 221.06 13.82 0.00 0.00 246364.65 16852.11 200727.41 00:21:11.809 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme7n1 : 0.57 222.88 13.93 0.00 0.00 236310.43 30084.14 208716.56 00:21:11.809 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme8n1 : 0.61 312.91 19.56 0.00 0.00 165539.03 15042.07 192738.26 00:21:11.809 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme9n1 : 0.59 217.04 13.56 0.00 0.00 229006.87 17101.78 218702.99 00:21:11.809 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.809 Verification LBA range: start 0x0 length 0x400 00:21:11.809 Nvme10n1 : 0.59 215.67 13.48 0.00 0.00 223289.54 17226.61 241671.80 00:21:11.809 [2024-11-20T16:15:29.852Z] =================================================================================================================== 00:21:11.809 [2024-11-20T16:15:29.852Z] Total : 2777.85 173.62 0.00 0.00 199462.33 15042.07 241671.80 00:21:12.068 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2552722 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- 
# stoptarget 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.005 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.005 rmmod nvme_tcp 00:21:13.005 rmmod nvme_fabrics 00:21:13.005 rmmod nvme_keyring 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2552722 ']' 
00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2552722 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2552722 ']' 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2552722 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.005 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552722 00:21:13.264 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.264 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.264 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552722' 00:21:13.264 killing process with pid 2552722 00:21:13.264 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2552722 00:21:13.264 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2552722 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.524 17:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.524 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.060 00:21:16.060 real 0m7.828s 00:21:16.060 user 0m23.478s 00:21:16.060 sys 0m1.300s 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:16.060 ************************************ 00:21:16.060 END TEST nvmf_shutdown_tc2 00:21:16.060 ************************************ 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:16.060 17:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:16.060 ************************************ 00:21:16.060 START TEST nvmf_shutdown_tc3 00:21:16.060 ************************************ 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.060 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.061 17:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:16.061 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.061 17:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:16.061 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:16.061 Found net devices under 0000:86:00.0: cvl_0_0 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:16.061 Found net devices under 0000:86:00.1: cvl_0_1 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.061 
17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.061 17:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.061 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:16.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:21:16.062 00:21:16.062 --- 10.0.0.2 ping statistics --- 00:21:16.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.062 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:21:16.062 00:21:16.062 --- 10.0.0.1 ping statistics --- 00:21:16.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.062 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2554126 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2554126 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2554126 ']' 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.062 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.062 [2024-11-20 17:15:33.982085] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:21:16.062 [2024-11-20 17:15:33.982137] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.062 [2024-11-20 17:15:34.060517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.321 [2024-11-20 17:15:34.101309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.321 [2024-11-20 17:15:34.101343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.321 [2024-11-20 17:15:34.101350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.321 [2024-11-20 17:15:34.101356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.321 [2024-11-20 17:15:34.101361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.321 [2024-11-20 17:15:34.103003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.321 [2024-11-20 17:15:34.103139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.321 [2024-11-20 17:15:34.103242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.321 [2024-11-20 17:15:34.103242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.889 [2024-11-20 17:15:34.862949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.889 17:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.889 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.148 Malloc1 00:21:17.148 [2024-11-20 17:15:34.971503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.148 Malloc2 00:21:17.148 Malloc3 00:21:17.148 Malloc4 00:21:17.148 Malloc5 00:21:17.148 Malloc6 00:21:17.408 Malloc7 00:21:17.408 Malloc8 00:21:17.408 Malloc9 
00:21:17.408 Malloc10 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2554404 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2554404 /var/tmp/bdevperf.sock 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2554404 ']' 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:17.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.408 { 00:21:17.408 "params": { 00:21:17.408 "name": "Nvme$subsystem", 00:21:17.408 "trtype": "$TEST_TRANSPORT", 00:21:17.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.408 "adrfam": "ipv4", 00:21:17.408 "trsvcid": "$NVMF_PORT", 00:21:17.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.408 "hdgst": ${hdgst:-false}, 00:21:17.408 "ddgst": ${ddgst:-false} 00:21:17.408 }, 00:21:17.408 "method": "bdev_nvme_attach_controller" 00:21:17.408 } 00:21:17.408 EOF 00:21:17.408 )") 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.408 { 00:21:17.408 "params": { 00:21:17.408 "name": "Nvme$subsystem", 00:21:17.408 "trtype": "$TEST_TRANSPORT", 00:21:17.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.408 
"adrfam": "ipv4", 00:21:17.408 "trsvcid": "$NVMF_PORT", 00:21:17.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.408 "hdgst": ${hdgst:-false}, 00:21:17.408 "ddgst": ${ddgst:-false} 00:21:17.408 }, 00:21:17.408 "method": "bdev_nvme_attach_controller" 00:21:17.408 } 00:21:17.408 EOF 00:21:17.408 )") 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.408 { 00:21:17.408 "params": { 00:21:17.408 "name": "Nvme$subsystem", 00:21:17.408 "trtype": "$TEST_TRANSPORT", 00:21:17.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.408 "adrfam": "ipv4", 00:21:17.408 "trsvcid": "$NVMF_PORT", 00:21:17.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.408 "hdgst": ${hdgst:-false}, 00:21:17.408 "ddgst": ${ddgst:-false} 00:21:17.408 }, 00:21:17.408 "method": "bdev_nvme_attach_controller" 00:21:17.408 } 00:21:17.408 EOF 00:21:17.408 )") 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.408 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.408 { 00:21:17.408 "params": { 00:21:17.408 "name": "Nvme$subsystem", 00:21:17.408 "trtype": "$TEST_TRANSPORT", 00:21:17.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.408 "adrfam": "ipv4", 00:21:17.408 "trsvcid": "$NVMF_PORT", 00:21:17.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:17.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.408 "hdgst": ${hdgst:-false}, 00:21:17.408 "ddgst": ${ddgst:-false} 00:21:17.408 }, 00:21:17.409 "method": "bdev_nvme_attach_controller" 00:21:17.409 } 00:21:17.409 EOF 00:21:17.409 )") 00:21:17.409 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.409 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.409 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.409 { 00:21:17.409 "params": { 00:21:17.409 "name": "Nvme$subsystem", 00:21:17.409 "trtype": "$TEST_TRANSPORT", 00:21:17.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.409 "adrfam": "ipv4", 00:21:17.409 "trsvcid": "$NVMF_PORT", 00:21:17.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.409 "hdgst": ${hdgst:-false}, 00:21:17.409 "ddgst": ${ddgst:-false} 00:21:17.409 }, 00:21:17.409 "method": "bdev_nvme_attach_controller" 00:21:17.409 } 00:21:17.409 EOF 00:21:17.409 )") 00:21:17.409 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.409 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.409 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.409 { 00:21:17.409 "params": { 00:21:17.409 "name": "Nvme$subsystem", 00:21:17.409 "trtype": "$TEST_TRANSPORT", 00:21:17.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.409 "adrfam": "ipv4", 00:21:17.409 "trsvcid": "$NVMF_PORT", 00:21:17.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.409 "hdgst": ${hdgst:-false}, 00:21:17.409 "ddgst": 
${ddgst:-false} 00:21:17.409 }, 00:21:17.409 "method": "bdev_nvme_attach_controller" 00:21:17.409 } 00:21:17.409 EOF 00:21:17.409 )") 00:21:17.409 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.409 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.668 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.669 { 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme$subsystem", 00:21:17.669 "trtype": "$TEST_TRANSPORT", 00:21:17.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "$NVMF_PORT", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.669 "hdgst": ${hdgst:-false}, 00:21:17.669 "ddgst": ${ddgst:-false} 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 } 00:21:17.669 EOF 00:21:17.669 )") 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.669 [2024-11-20 17:15:35.450946] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:21:17.669 [2024-11-20 17:15:35.450997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554404 ] 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.669 { 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme$subsystem", 00:21:17.669 "trtype": "$TEST_TRANSPORT", 00:21:17.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "$NVMF_PORT", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.669 "hdgst": ${hdgst:-false}, 00:21:17.669 "ddgst": ${ddgst:-false} 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 } 00:21:17.669 EOF 00:21:17.669 )") 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.669 { 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme$subsystem", 00:21:17.669 "trtype": "$TEST_TRANSPORT", 00:21:17.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "$NVMF_PORT", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.669 "hdgst": ${hdgst:-false}, 00:21:17.669 "ddgst": ${ddgst:-false} 00:21:17.669 }, 00:21:17.669 "method": 
"bdev_nvme_attach_controller" 00:21:17.669 } 00:21:17.669 EOF 00:21:17.669 )") 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.669 { 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme$subsystem", 00:21:17.669 "trtype": "$TEST_TRANSPORT", 00:21:17.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "$NVMF_PORT", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.669 "hdgst": ${hdgst:-false}, 00:21:17.669 "ddgst": ${ddgst:-false} 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 } 00:21:17.669 EOF 00:21:17.669 )") 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:17.669 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme1", 00:21:17.669 "trtype": "tcp", 00:21:17.669 "traddr": "10.0.0.2", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "4420", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.669 "hdgst": false, 00:21:17.669 "ddgst": false 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 },{ 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme2", 00:21:17.669 "trtype": "tcp", 00:21:17.669 "traddr": "10.0.0.2", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "4420", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:17.669 "hdgst": false, 00:21:17.669 "ddgst": false 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 },{ 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme3", 00:21:17.669 "trtype": "tcp", 00:21:17.669 "traddr": "10.0.0.2", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "4420", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:17.669 "hdgst": false, 00:21:17.669 "ddgst": false 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 },{ 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme4", 00:21:17.669 "trtype": "tcp", 00:21:17.669 "traddr": "10.0.0.2", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "4420", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:17.669 "hdgst": false, 00:21:17.669 "ddgst": false 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 },{ 00:21:17.669 "params": { 
00:21:17.669 "name": "Nvme5", 00:21:17.669 "trtype": "tcp", 00:21:17.669 "traddr": "10.0.0.2", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "4420", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:17.669 "hdgst": false, 00:21:17.669 "ddgst": false 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 },{ 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme6", 00:21:17.669 "trtype": "tcp", 00:21:17.669 "traddr": "10.0.0.2", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "4420", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:17.669 "hdgst": false, 00:21:17.669 "ddgst": false 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 },{ 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme7", 00:21:17.669 "trtype": "tcp", 00:21:17.669 "traddr": "10.0.0.2", 00:21:17.669 "adrfam": "ipv4", 00:21:17.669 "trsvcid": "4420", 00:21:17.669 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:17.669 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:17.669 "hdgst": false, 00:21:17.669 "ddgst": false 00:21:17.669 }, 00:21:17.669 "method": "bdev_nvme_attach_controller" 00:21:17.669 },{ 00:21:17.669 "params": { 00:21:17.669 "name": "Nvme8", 00:21:17.669 "trtype": "tcp", 00:21:17.669 "traddr": "10.0.0.2", 00:21:17.670 "adrfam": "ipv4", 00:21:17.670 "trsvcid": "4420", 00:21:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:17.670 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:17.670 "hdgst": false, 00:21:17.670 "ddgst": false 00:21:17.670 }, 00:21:17.670 "method": "bdev_nvme_attach_controller" 00:21:17.670 },{ 00:21:17.670 "params": { 00:21:17.670 "name": "Nvme9", 00:21:17.670 "trtype": "tcp", 00:21:17.670 "traddr": "10.0.0.2", 00:21:17.670 "adrfam": "ipv4", 00:21:17.670 "trsvcid": "4420", 00:21:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:17.670 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:17.670 "hdgst": false, 00:21:17.670 "ddgst": false 00:21:17.670 }, 00:21:17.670 "method": "bdev_nvme_attach_controller" 00:21:17.670 },{ 00:21:17.670 "params": { 00:21:17.670 "name": "Nvme10", 00:21:17.670 "trtype": "tcp", 00:21:17.670 "traddr": "10.0.0.2", 00:21:17.670 "adrfam": "ipv4", 00:21:17.670 "trsvcid": "4420", 00:21:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:17.670 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:17.670 "hdgst": false, 00:21:17.670 "ddgst": false 00:21:17.670 }, 00:21:17.670 "method": "bdev_nvme_attach_controller" 00:21:17.670 }' 00:21:17.670 [2024-11-20 17:15:35.526879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.670 [2024-11-20 17:15:35.567698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.574 Running I/O for 10 seconds... 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:19.574 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=16 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 16 -ge 100 ']' 00:21:19.575 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2554126 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2554126 ']' 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2554126 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:19.849 17:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2554126 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2554126' killing process with pid 2554126 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2554126 00:21:19.849 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2554126 00:21:19.849 [2024-11-20 17:15:37.739803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1022850 is same with the state(6) to be set 00:21:19.850 [2024-11-20 17:15:37.741365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129ae30 is same with the state(6) to be set 00:21:19.850 [2024-11-20 17:15:37.742367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1022d20 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744375]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744453] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744526] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.851 [2024-11-20 17:15:37.744571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744601] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.744661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10231f0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745703] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745785] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745858] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745931] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.745997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746003] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set 00:21:19.852 [2024-11-20 17:15:37.746077] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set
00:21:19.852 [2024-11-20 17:15:37.746084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10236e0 is same with the state(6) to be set
00:21:19.852 [2024-11-20 17:15:37.746238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.852 [2024-11-20 17:15:37.746268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching ASYNC EVENT REQUEST / ABORTED - SQ DELETION record pairs for cid:1-3 elided ...]
00:21:19.853 [2024-11-20 17:15:37.746320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa4830 is same with the state(6) to be set
00:21:19.853 [2024-11-20 17:15:37.746366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.853 [2024-11-20 17:15:37.746375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching ASYNC EVENT REQUEST / ABORTED - SQ DELETION record pairs for cid:1-3 elided ...]
00:21:19.853 [2024-11-20 17:15:37.746423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x654fe0 is same with the state(6) to be set
00:21:19.853 [2024-11-20 17:15:37.746460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.853 [2024-11-20 17:15:37.746469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching ASYNC EVENT REQUEST / ABORTED - SQ DELETION record pairs for cid:1-3 elided ...]
00:21:19.853 [2024-11-20 17:15:37.746520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6602c0 is same with the state(6) to be set
00:21:19.853 [2024-11-20 17:15:37.746543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.853 [2024-11-20 17:15:37.746552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching ASYNC EVENT REQUEST / ABORTED - SQ DELETION record pairs for cid:1-3 elided ...]
00:21:19.853 [2024-11-20 17:15:37.746599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6551e0 is same with the state(6) to be set
00:21:19.853 [2024-11-20 17:15:37.746628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.853 [2024-11-20 17:15:37.746636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching ASYNC EVENT REQUEST / ABORTED - SQ DELETION record pairs for cid:1-3 elided ...]
00:21:19.853 [2024-11-20 17:15:37.746686] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6611b0 is same with the state(6) to be set
00:21:19.853 [2024-11-20 17:15:37.746758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set
[... identical tcp.c:1773 recv-state *ERROR* records for tqpair=0x1023bb0 elided ...]
00:21:19.854 [2024-11-20 17:15:37.747059] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747131] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023bb0 is same with the state(6) to be set 00:21:19.854 [2024-11-20 17:15:37.747396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 
[2024-11-20 17:15:37.747568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.854 [2024-11-20 17:15:37.747762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.854 [2024-11-20 17:15:37.747768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 
17:15:37.747899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.747985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.747994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024080 is same with the state(6) to be set 00:21:19.855 [2024-11-20 17:15:37.748075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024080 is same with the state(6) to be set 00:21:19.855 [2024-11-20 17:15:37.748092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024080 is same with the state(6) to be set 00:21:19.855 [2024-11-20 17:15:37.748110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024080 is same with the state(6) to be set 00:21:19.855 [2024-11-20 17:15:37.748120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 
17:15:37.748128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-11-20 17:15:37.748296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.855 [2024-11-20 17:15:37.748304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with [2024-11-20 17:15:37.748884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:12the state(6) to be set 00:21:19.856 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with [2024-11-20 17:15:37.748894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:21:19.856 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 
17:15:37.748938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with [2024-11-20 17:15:37.748937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:12the state(6) to be set 00:21:19.856 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with [2024-11-20 17:15:37.748974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:12the state(6) to be set 00:21:19.856 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the 
state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.748989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.748993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-11-20 17:15:37.748996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.749000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.856 [2024-11-20 17:15:37.749003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.856 [2024-11-20 17:15:37.749009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:12[2024-11-20 17:15:37.749010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 17:15:37.749018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:12[2024-11-20 17:15:37.749049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with [2024-11-20 17:15:37.749058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:21:19.857 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749081] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 
00:21:19.857 [2024-11-20 17:15:37.749127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857
[2024-11-20 17:15:37.749190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857
[2024-11-20 17:15:37.749192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857
[2024-11-20 17:15:37.749197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857
[2024-11-20 17:15:37.749208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857
[2024-11-20 17:15:37.749209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857
[2024-11-20 17:15:37.749217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857
[2024-11-20 17:15:37.749217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857
[2024-11-20 17:15:37.749228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857
[2024-11-20 17:15:37.749228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857
[2024-11-20 17:15:37.749236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857
[2024-11-20 17:15:37.749237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857
[2024-11-20 17:15:37.749243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024550 is same with the state(6) to be set 00:21:19.857 [2024-11-20 17:15:37.749314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.857 [2024-11-20 17:15:37.749403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-11-20 17:15:37.749409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 
[2024-11-20 17:15:37.749438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.749703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-11-20 17:15:37.749709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.858 [2024-11-20 17:15:37.750080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.858 [2024-11-20 17:15:37.750094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.858 [2024-11-20 17:15:37.750100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.858 [2024-11-20 17:15:37.750107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.858 [2024-11-20 17:15:37.750113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.858 [2024-11-20 17:15:37.750118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.858 [2024-11-20 17:15:37.750127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.858
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.750439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.750445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.750451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.750457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.750464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.750470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.750476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1024a40 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.751019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:19.859 [2024-11-20 17:15:37.751033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129a960 is same with the state(6) to be set 00:21:19.859 [2024-11-20 17:15:37.751052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa4830 (9): Bad file descriptor 00:21:19.859 [2024-11-20 17:15:37.752175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.859 [2024-11-20 17:15:37.752195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.859 [2024-11-20 17:15:37.752211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.859 [2024-11-20 17:15:37.752219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.859 [2024-11-20 17:15:37.752228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.859 [2024-11-20 17:15:37.752235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [analogous READ command / ABORTED - SQ DELETION completion pairs for cid:1 through cid:60 (lba 16512 through 24064, step 128) elided] 00:21:19.861 [2024-11-20 17:15:37.753117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.861 [2024-11-20 17:15:37.753124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.754426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:19.861 [2024-11-20 17:15:37.754483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c030 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.755980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:19.861 [2024-11-20 17:15:37.756033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x575610 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.756155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.861 [2024-11-20 17:15:37.756168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa4830 with addr=10.0.0.2, port=4420 00:21:19.861 [2024-11-20 17:15:37.756175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa4830 is same with the state(6) to be set 00:21:19.861 [2024-11-20 17:15:37.756253] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.861 [2024-11-20 17:15:37.756299] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.861 [2024-11-20 17:15:37.756345] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.861 [2024-11-20 17:15:37.756389] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.861 [2024-11-20 17:15:37.756950] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:19.861 [2024-11-20 17:15:37.756991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe140 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.757095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.861 [2024-11-20 17:15:37.757106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8c030 with addr=10.0.0.2, port=4420 00:21:19.861 [2024-11-20 17:15:37.757113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c030 is same with the state(6) to be set 00:21:19.861 [2024-11-20 17:15:37.757130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa4830 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.757154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.861 [2024-11-20 17:15:37.757164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.757171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.861 [2024-11-20 17:15:37.757181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.757189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.861 [2024-11-20 17:15:37.757196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.757211] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.861 [2024-11-20 17:15:37.757218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.757224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf3c0 is same with the state(6) to be set 00:21:19.861 [2024-11-20 17:15:37.757243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x654fe0 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.757269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.861 [2024-11-20 17:15:37.757277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.757284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.861 [2024-11-20 17:15:37.757291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.757298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.861 [2024-11-20 17:15:37.757304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.757310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.861 [2024-11-20 17:15:37.757317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:19.861 [2024-11-20 17:15:37.757323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81f20 is same with the state(6) to be set 00:21:19.861 [2024-11-20 17:15:37.757337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6602c0 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.757350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6551e0 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.757365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6611b0 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.757689] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.861 [2024-11-20 17:15:37.757902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.861 [2024-11-20 17:15:37.757917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x575610 with addr=10.0.0.2, port=4420 00:21:19.861 [2024-11-20 17:15:37.757925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x575610 is same with the state(6) to be set 00:21:19.861 [2024-11-20 17:15:37.757942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c030 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.757952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:19.861 [2024-11-20 17:15:37.757959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:19.861 [2024-11-20 17:15:37.757967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:21:19.861 [2024-11-20 17:15:37.757975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:19.861 [2024-11-20 17:15:37.758064] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.861 [2024-11-20 17:15:37.758228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.861 [2024-11-20 17:15:37.758240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabe140 with addr=10.0.0.2, port=4420 00:21:19.861 [2024-11-20 17:15:37.758247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabe140 is same with the state(6) to be set 00:21:19.861 [2024-11-20 17:15:37.758255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x575610 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.758264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:19.861 [2024-11-20 17:15:37.758270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:19.861 [2024-11-20 17:15:37.758277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:19.861 [2024-11-20 17:15:37.758284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:19.861 [2024-11-20 17:15:37.758330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe140 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.758339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:19.861 [2024-11-20 17:15:37.758345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:19.861 [2024-11-20 17:15:37.758351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:19.861 [2024-11-20 17:15:37.758357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:19.861 [2024-11-20 17:15:37.758386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:19.861 [2024-11-20 17:15:37.758392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:19.861 [2024-11-20 17:15:37.758398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:19.861 [2024-11-20 17:15:37.758404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:19.861 [2024-11-20 17:15:37.764591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:19.861 [2024-11-20 17:15:37.764849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.861 [2024-11-20 17:15:37.764862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa4830 with addr=10.0.0.2, port=4420 00:21:19.861 [2024-11-20 17:15:37.764870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa4830 is same with the state(6) to be set 00:21:19.861 [2024-11-20 17:15:37.764899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa4830 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.764928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:19.861 [2024-11-20 17:15:37.764935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:19.861 [2024-11-20 17:15:37.764942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:19.861 [2024-11-20 17:15:37.764949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:19.861 [2024-11-20 17:15:37.766270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:19.861 [2024-11-20 17:15:37.766529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.861 [2024-11-20 17:15:37.766545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8c030 with addr=10.0.0.2, port=4420 00:21:19.861 [2024-11-20 17:15:37.766553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c030 is same with the state(6) to be set 00:21:19.861 [2024-11-20 17:15:37.766583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c030 (9): Bad file descriptor 00:21:19.861 [2024-11-20 17:15:37.766612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:19.861 [2024-11-20 17:15:37.766619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:19.862 [2024-11-20 17:15:37.766625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:19.862 [2024-11-20 17:15:37.766632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:19.862 [2024-11-20 17:15:37.766991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf3c0 (9): Bad file descriptor 00:21:19.862 [2024-11-20 17:15:37.767014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa81f20 (9): Bad file descriptor 00:21:19.862 [2024-11-20 17:15:37.767117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767193] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 
17:15:37.767371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.862 [2024-11-20 17:15:37.767619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.862 [2024-11-20 17:15:37.767671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.862 [2024-11-20 17:15:37.767679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767700] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 
17:15:37.767949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.767987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.767995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.768003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.768010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.768018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.863 [2024-11-20 17:15:37.768024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.863 [2024-11-20 17:15:37.768032] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.768039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.768048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.768054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.768062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.768068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.768076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8651a0 is same with the state(6) to be set
00:21:19.863 [2024-11-20 17:15:37.769056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.769072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.769082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.769089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.769097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.769104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.769112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.769118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.769126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.769133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.769141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.769147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.769155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.769162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.863 [2024-11-20 17:15:37.769172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.863 [2024-11-20 17:15:37.769179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.864 [2024-11-20 17:15:37.769758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.864 [2024-11-20 17:15:37.769766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.769988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.769995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.770003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.770009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.770016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8661d0 is same with the state(6) to be set
00:21:19.865 [2024-11-20 17:15:37.771008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.865 [2024-11-20 17:15:37.771337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.865 [2024-11-20 17:15:37.771345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.866 [2024-11-20 17:15:37.771758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.866 [2024-11-20 17:15:37.771766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771849] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.866 [2024-11-20 17:15:37.771916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.866 [2024-11-20 17:15:37.771922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.771930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.771937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.771945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.771951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.771958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x867260 is same with the state(6) to be set 00:21:19.867 [2024-11-20 17:15:37.772977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.772990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773122] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773206] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 
17:15:37.773377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.867 [2024-11-20 17:15:37.773456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.867 [2024-11-20 17:15:37.773464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.868 [2024-11-20 17:15:37.773622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773699] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.868 [2024-11-20 17:15:37.773914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.868 [2024-11-20 17:15:37.773921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa65eb0 is same with the state(6) to be set 00:21:19.868 [2024-11-20 17:15:37.774883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:19.868 [2024-11-20 17:15:37.774901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:19.868 [2024-11-20 17:15:37.774911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:19.868 [2024-11-20 17:15:37.774924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:19.868 [2024-11-20 17:15:37.775277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.868 [2024-11-20 17:15:37.775293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6611b0 with addr=10.0.0.2, port=4420 00:21:19.868 [2024-11-20 17:15:37.775301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6611b0 is same with the state(6) to be set 00:21:19.868 [2024-11-20 17:15:37.775430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.868 [2024-11-20 17:15:37.775440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6551e0 with addr=10.0.0.2, port=4420 00:21:19.868 [2024-11-20 17:15:37.775446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6551e0 is same with the state(6) to be set 00:21:19.868 [2024-11-20 17:15:37.775587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.868 [2024-11-20 17:15:37.775596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6602c0 with addr=10.0.0.2, port=4420 00:21:19.868 [2024-11-20 17:15:37.775603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6602c0 is same with the state(6) to be set 00:21:19.868 [2024-11-20 17:15:37.775797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.868 [2024-11-20 17:15:37.775807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x654fe0 with addr=10.0.0.2, port=4420 00:21:19.868 [2024-11-20 17:15:37.775813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x654fe0 is same with the state(6) to be set 
00:21:19.868 [2024-11-20 17:15:37.776703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:19.868 [2024-11-20 17:15:37.776715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:19.869 [2024-11-20 17:15:37.776725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:19.869 [2024-11-20 17:15:37.776751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6611b0 (9): Bad file descriptor 00:21:19.869 [2024-11-20 17:15:37.776760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6551e0 (9): Bad file descriptor 00:21:19.869 [2024-11-20 17:15:37.776769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6602c0 (9): Bad file descriptor 00:21:19.869 [2024-11-20 17:15:37.776777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x654fe0 (9): Bad file descriptor 00:21:19.869 [2024-11-20 17:15:37.776964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.869 [2024-11-20 17:15:37.776977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x575610 with addr=10.0.0.2, port=4420 00:21:19.869 [2024-11-20 17:15:37.776984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x575610 is same with the state(6) to be set 00:21:19.869 [2024-11-20 17:15:37.777197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.869 [2024-11-20 17:15:37.777209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabe140 with addr=10.0.0.2, port=4420 00:21:19.869 [2024-11-20 17:15:37.777218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabe140 is same with the state(6) to be set 00:21:19.869 [2024-11-20 
17:15:37.777338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.869 [2024-11-20 17:15:37.777348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa4830 with addr=10.0.0.2, port=4420 00:21:19.869 [2024-11-20 17:15:37.777355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa4830 is same with the state(6) to be set 00:21:19.869 [2024-11-20 17:15:37.777362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:19.869 [2024-11-20 17:15:37.777371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:19.869 [2024-11-20 17:15:37.777379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:19.869 [2024-11-20 17:15:37.777387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:19.869 [2024-11-20 17:15:37.777394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:19.869 [2024-11-20 17:15:37.777400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:19.869 [2024-11-20 17:15:37.777406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:19.869 [2024-11-20 17:15:37.777412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:19.869 [2024-11-20 17:15:37.777418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:19.869 [2024-11-20 17:15:37.777424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:19.869 [2024-11-20 17:15:37.777430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:19.869 [2024-11-20 17:15:37.777435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:19.869 [2024-11-20 17:15:37.777442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:19.869 [2024-11-20 17:15:37.777448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:19.869 [2024-11-20 17:15:37.777453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:19.869 [2024-11-20 17:15:37.777459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:19.869 [2024-11-20 17:15:37.777507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:19.869 [2024-11-20 17:15:37.777524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x575610 (9): Bad file descriptor 00:21:19.869 [2024-11-20 17:15:37.777533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe140 (9): Bad file descriptor 00:21:19.869 [2024-11-20 17:15:37.777541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa4830 (9): Bad file descriptor 00:21:19.869 [2024-11-20 17:15:37.777814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.869 [2024-11-20 17:15:37.777827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8c030 with addr=10.0.0.2, port=4420 00:21:19.869 [2024-11-20 17:15:37.777834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c030 is same with the state(6) to be set 00:21:19.869 [2024-11-20 17:15:37.777840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:19.869 [2024-11-20 17:15:37.777846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:19.869 [2024-11-20 17:15:37.777852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:19.869 [2024-11-20 17:15:37.777859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:21:19.869 [2024-11-20 17:15:37.777865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:19.869 [2024-11-20 17:15:37.777871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:19.869 [2024-11-20 17:15:37.777877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:19.869 [2024-11-20 17:15:37.777885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:19.869 [2024-11-20 17:15:37.777892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:19.869 [2024-11-20 17:15:37.777897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:19.869 [2024-11-20 17:15:37.777903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:19.869 [2024-11-20 17:15:37.777909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:19.869 [2024-11-20 17:15:37.777963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.777972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.777983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.777990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.777999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.869 [2024-11-20 17:15:37.778214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.869 [2024-11-20 17:15:37.778220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.869 [2024-11-20 17:15:37.778228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 
17:15:37.778555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778635] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 [2024-11-20 17:15:37.778787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.870 [2024-11-20 17:15:37.778795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.870 
[2024-11-20 17:15:37.778802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.778810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.778816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.778824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.778830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.778838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.778845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.778853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.778861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.778869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.778876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.778884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.778891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.778899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.778905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.778912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1762ba0 is same with the state(6) to be set 00:21:19.871 [2024-11-20 17:15:37.779896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.779908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.779918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.779925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.779934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.779940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.779948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.779955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.779963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.779970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.779978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.779984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.779993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.779999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:19.871 [2024-11-20 17:15:37.780039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780118] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 17:15:37.780354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.871 [2024-11-20 17:15:37.780362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.871 [2024-11-20 
17:15:37.780369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780451] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 
[2024-11-20 17:15:37.780617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.872 [2024-11-20 17:15:37.780769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.872 [2024-11-20 17:15:37.780775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.872 [2024-11-20 17:15:37.780783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.872 [2024-11-20 17:15:37.780789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.872 [2024-11-20 17:15:37.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.872 [2024-11-20 17:15:37.780803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.872 [2024-11-20 17:15:37.780811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.872 [2024-11-20 17:15:37.780818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.872 [2024-11-20 17:15:37.780825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.872 [2024-11-20 17:15:37.780831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.872 [2024-11-20 17:15:37.780838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b0510 is same with the state(6) to be set
00:21:19.872 [2024-11-20 17:15:37.781787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:19.872 task offset: 16384 on job bdev=Nvme10n1 fails
00:21:19.872
00:21:19.872 Latency(us)
00:21:19.872 [2024-11-20T16:15:37.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:19.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.872 Job: Nvme1n1 ended in about 0.66 seconds with error
00:21:19.872 Verification LBA range: start 0x0 length 0x400
00:21:19.872 Nvme1n1 : 0.66 192.49 12.03 96.25 0.00 218411.40 15978.30 208716.56
00:21:19.873 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Job: Nvme2n1 ended in about 0.67 seconds with error
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme2n1 : 0.67 191.93 12.00 95.97 0.00 213791.05 19723.22 191739.61
00:21:19.873 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Job: Nvme3n1 ended in about 0.67 seconds with error
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme3n1 : 0.67 191.38 11.96 95.69 0.00 209243.67 15791.06 213709.78
00:21:19.873 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Job: Nvme4n1 ended in about 0.67 seconds with error
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme4n1 : 0.67 190.82 11.93 95.41 0.00 204700.69 15853.47 205720.62
00:21:19.873 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Job: Nvme5n1 ended in about 0.65 seconds with error
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme5n1 : 0.65 197.48 12.34 98.74 0.00 192009.51 4556.31 213709.78
00:21:19.873 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Job: Nvme6n1 ended in about 0.65 seconds with error
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme6n1 : 0.65 196.44 12.28 98.22 0.00 187998.84 4899.60 217704.35
00:21:19.873 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Job: Nvme7n1 ended in about 0.68 seconds with error
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme7n1 : 0.68 195.33 12.21 94.71 0.00 186944.78 16602.45 213709.78
00:21:19.873 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Job: Nvme8n1 ended in about 0.68 seconds with error
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme8n1 : 0.68 188.88 11.80 94.44 0.00 186347.85 14293.09 214708.42
00:21:19.873 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme9n1 : 0.65 196.79 12.30 0.00 0.00 258279.62 26963.38 219701.64
00:21:19.873 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.873 Job: Nvme10n1 ended in about 0.65 seconds with error
00:21:19.873 Verification LBA range: start 0x0 length 0x400
00:21:19.873 Nvme10n1 : 0.65 197.82 12.36 98.91 0.00 165962.85 4337.86 233682.65
00:21:19.873 [2024-11-20T16:15:37.916Z] ===================================================================================================================
00:21:19.873 [2024-11-20T16:15:37.916Z] Total : 1939.37 121.21 868.33 0.00 200412.05 4337.86 233682.65
00:21:19.873 [2024-11-20 17:15:37.811600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:19.873 [2024-11-20 17:15:37.811651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:19.873 [2024-11-20 17:15:37.811698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c030 (9): Bad file descriptor
00:21:19.873 [2024-11-20 17:15:37.812251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:19.873 [2024-11-20 17:15:37.812276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa81f20 with addr=10.0.0.2, port=4420
00:21:19.873 [2024-11-20 17:15:37.812286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81f20 is same with the state(6) to be set
00:21:19.873 [2024-11-20 17:15:37.812504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.812515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf3c0 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.812522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf3c0 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.812529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:19.873 [2024-11-20 17:15:37.812536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:19.873 [2024-11-20 17:15:37.812545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:19.873 [2024-11-20 17:15:37.812554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:19.873 [2024-11-20 17:15:37.813117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa81f20 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.813133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf3c0 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.813176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:19.873 [2024-11-20 17:15:37.813188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:19.873 [2024-11-20 17:15:37.813196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:19.873 [2024-11-20 17:15:37.813208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:19.873 [2024-11-20 17:15:37.813216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:19.873 [2024-11-20 17:15:37.813225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:19.873 [2024-11-20 17:15:37.813266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:19.873 [2024-11-20 17:15:37.813273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:19.873 [2024-11-20 17:15:37.813279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:19.873 [2024-11-20 17:15:37.813286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:21:19.873 [2024-11-20 17:15:37.813293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:19.873 [2024-11-20 17:15:37.813299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:19.873 [2024-11-20 17:15:37.813305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:19.873 [2024-11-20 17:15:37.813311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:19.873 [2024-11-20 17:15:37.813344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:19.873 [2024-11-20 17:15:37.813353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:19.873 [2024-11-20 17:15:37.813599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.813613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x654fe0 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.813620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x654fe0 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.813857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.813867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6602c0 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.813874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6602c0 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.814092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.814102] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6551e0 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.814109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6551e0 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.814335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.814345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6611b0 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.814353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6611b0 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.814494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.814504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa4830 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.814511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa4830 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.814649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.814659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabe140 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.814666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabe140 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.814907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.814918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x575610 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.814928] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x575610 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.815097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.873 [2024-11-20 17:15:37.815107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8c030 with addr=10.0.0.2, port=4420 00:21:19.873 [2024-11-20 17:15:37.815114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c030 is same with the state(6) to be set 00:21:19.873 [2024-11-20 17:15:37.815123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x654fe0 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.815132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6602c0 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.815140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6551e0 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.815148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6611b0 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.815155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa4830 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.815163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe140 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.815189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x575610 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.815198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c030 (9): Bad file descriptor 00:21:19.873 [2024-11-20 17:15:37.815210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 
00:21:19.873 [2024-11-20 17:15:37.815216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:21:19.873 [2024-11-20 17:15:37.815222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:21:19.874 [2024-11-20 17:15:37.815229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:21:19.874 [2024-11-20 17:15:37.815235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:19.874 [2024-11-20 17:15:37.815240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:19.874 [2024-11-20 17:15:37.815246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:19.874 [2024-11-20 17:15:37.815252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:19.874 [2024-11-20 17:15:37.815258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:19.874 [2024-11-20 17:15:37.815264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:19.874 [2024-11-20 17:15:37.815270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:19.874 [2024-11-20 17:15:37.815275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:19.874 [2024-11-20 17:15:37.815281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:19.874 [2024-11-20 17:15:37.815287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:19.874 [2024-11-20 17:15:37.815293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:19.874 [2024-11-20 17:15:37.815298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:19.874 [2024-11-20 17:15:37.815306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:21:19.874 [2024-11-20 17:15:37.815312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:21:19.874 [2024-11-20 17:15:37.815318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:21:19.874 [2024-11-20 17:15:37.815324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:21:19.874 [2024-11-20 17:15:37.815331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:19.874 [2024-11-20 17:15:37.815337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:19.874 [2024-11-20 17:15:37.815343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:19.874 [2024-11-20 17:15:37.815348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:19.874 [2024-11-20 17:15:37.815371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:19.874 [2024-11-20 17:15:37.815378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:19.874 [2024-11-20 17:15:37.815384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:19.874 [2024-11-20 17:15:37.815389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:19.874 [2024-11-20 17:15:37.815395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:19.874 [2024-11-20 17:15:37.815401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:19.874 [2024-11-20 17:15:37.815407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:19.874 [2024-11-20 17:15:37.815412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:20.133 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:21:21.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2554404
00:21:21.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:21:21.510 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2554404
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2554404
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.511 rmmod nvme_tcp 00:21:21.511 rmmod nvme_fabrics 00:21:21.511 rmmod nvme_keyring 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:21.511 17:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2554126 ']' 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2554126 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2554126 ']' 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2554126 00:21:21.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2554126) - No such process 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2554126 is not found' 00:21:21.511 Process with pid 2554126 is not found 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.511 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.415 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.415 00:21:23.415 real 0m7.710s 00:21:23.415 user 0m18.700s 00:21:23.415 sys 0m1.274s 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.416 ************************************ 00:21:23.416 END TEST nvmf_shutdown_tc3 00:21:23.416 ************************************ 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:23.416 ************************************ 00:21:23.416 START TEST nvmf_shutdown_tc4 00:21:23.416 ************************************ 00:21:23.416 17:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.416 17:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.416 17:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:23.416 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:23.416 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.416 17:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:23.416 Found net devices under 0000:86:00.0: cvl_0_0 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:23.416 Found net devices under 0000:86:00.1: cvl_0_1 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.416 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.417 17:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.417 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:23.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:21:23.676 00:21:23.676 --- 10.0.0.2 ping statistics --- 00:21:23.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.676 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:21:23.676 00:21:23.676 --- 10.0.0.1 ping statistics --- 00:21:23.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.676 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.676 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.935 17:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2555615 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2555615 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2555615 ']' 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.936 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.936 [2024-11-20 17:15:41.781862] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:21:23.936 [2024-11-20 17:15:41.781913] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.936 [2024-11-20 17:15:41.861454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.936 [2024-11-20 17:15:41.902627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.936 [2024-11-20 17:15:41.902665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.936 [2024-11-20 17:15:41.902672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.936 [2024-11-20 17:15:41.902678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.936 [2024-11-20 17:15:41.902683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.936 [2024-11-20 17:15:41.904190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.936 [2024-11-20 17:15:41.904241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.936 [2024-11-20 17:15:41.904346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.936 [2024-11-20 17:15:41.904347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:24.194 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.195 [2024-11-20 17:15:42.049091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.195 17:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.195 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.195 Malloc1 00:21:24.195 [2024-11-20 17:15:42.153196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.195 Malloc2 00:21:24.195 Malloc3 00:21:24.453 Malloc4 00:21:24.453 Malloc5 00:21:24.453 Malloc6 00:21:24.453 Malloc7 00:21:24.453 Malloc8 00:21:24.453 Malloc9 
00:21:24.711 Malloc10 00:21:24.711 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.711 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:24.711 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.711 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.711 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2555724 00:21:24.711 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:24.711 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:24.711 [2024-11-20 17:15:42.652255] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2555615 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2555615 ']' 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2555615 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2555615 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2555615' 00:21:29.989 killing process with pid 2555615 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2555615 00:21:29.989 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2555615 00:21:29.989 [2024-11-20 17:15:47.647572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19937a0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 
17:15:47.647625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19937a0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.647634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19937a0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.647642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19937a0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.647654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19937a0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.647660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19937a0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.647667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19937a0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.647673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19937a0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.649443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179f6b0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.649472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179f6b0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.649480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179f6b0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.649488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179f6b0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.649494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179f6b0 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.651164] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a0070 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.651198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a0070 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.651221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a0070 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.651227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a0070 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.651234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a0070 is same with the state(6) to be set 00:21:29.989 [2024-11-20 17:15:47.651241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a0070 is same with the state(6) to be set 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error 
(sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 [2024-11-20 17:15:47.652409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed 
with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.989 starting I/O failed: -6 00:21:29.989 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, 
sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 [2024-11-20 17:15:47.653309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, 
sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 [2024-11-20 17:15:47.653796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2d80 is same with the state(6) to be set 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 [2024-11-20 17:15:47.653818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2d80 is same with the state(6) to be set 00:21:29.990 starting I/O failed: -6 00:21:29.990 [2024-11-20 17:15:47.653829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2d80 is same with the state(6) to be set 00:21:29.990 [2024-11-20 17:15:47.653836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2d80 is same with the state(6) to be set 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 [2024-11-20 17:15:47.653843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2d80 is same with the state(6) to be set 00:21:29.990 starting I/O failed: -6 00:21:29.990 [2024-11-20 17:15:47.653849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2d80 is same with the state(6) to be set 00:21:29.990 [2024-11-20 17:15:47.653856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2d80 is same with the state(6) to be set 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 
00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 [2024-11-20 17:15:47.654161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3250 is same with Write completed with error (sct=0, sc=8) 00:21:29.990 the state(6) to be set 00:21:29.990 starting I/O failed: -6 00:21:29.990 [2024-11-20 17:15:47.654185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3250 is same with the state(6) to be set 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 [2024-11-20 17:15:47.654197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3250 is same with the state(6) to be set 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 [2024-11-20 17:15:47.654214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3250 is same with the 
state(6) to be set 00:21:29.990 starting I/O failed: -6 00:21:29.990 [2024-11-20 17:15:47.654223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3250 is same with the state(6) to be set 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 [2024-11-20 17:15:47.654233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3250 is same with the state(6) to be set 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 [2024-11-20 17:15:47.654355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O 
failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.990 Write completed with error (sct=0, sc=8) 00:21:29.990 starting I/O failed: -6 00:21:29.991 [2024-11-20 17:15:47.654677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3720 is same with the state(6) to be set 00:21:29.991 Write completed with error (sct=0, sc=8) 00:21:29.991 starting I/O failed: -6 00:21:29.991 [2024-11-20 17:15:47.654697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3720 is same with the state(6) to be set 00:21:29.991 [2024-11-20 17:15:47.654705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3720 is same with the state(6) to be set 00:21:29.991 Write completed with error (sct=0, sc=8) 00:21:29.991 [2024-11-20 17:15:47.654711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3720 is same with the state(6) to be set 00:21:29.991 starting I/O failed: -6 00:21:29.991 [2024-11-20 17:15:47.654718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3720 is same with the state(6) to be set 00:21:29.991 [2024-11-20 17:15:47.654724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3720 is same with the state(6) to be set 00:21:29.991 [2024-11-20 17:15:47.654730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3720 is same with Write completed with error (sct=0, sc=8) 00:21:29.991 the state(6) to be set 00:21:29.991 starting I/O failed: -6 00:21:29.991 [2024-11-20 17:15:47.654738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a3720 is same with the state(6) to be set 00:21:29.991 Write completed with error (sct=0, sc=8) 00:21:29.991 starting I/O failed: -6 00:21:29.991 Write completed with error (sct=0, sc=8) 00:21:29.991 starting I/O failed: -6 00:21:29.991 Write completed with error 
(sct=0, sc=8)
00:21:29.991 starting I/O failed: -6
00:21:29.991 Write completed with error (sct=0, sc=8)
[the two messages above alternate repeatedly as queued writes are failed back]
00:21:29.991 [2024-11-20 17:15:47.655005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a28b0 is same with the state(6) to be set
[same tcp.c:1773 message for tqpair=0x17a28b0 repeated at 17:15:47.655028, .655036, .655043, .655050 and .655057, interleaved with further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs]
00:21:29.991 [2024-11-20 17:15:47.656030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:29.991 NVMe io qpair process completion error
00:21:29.991 [2024-11-20 17:15:47.661382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19915f0 is same with the state(6) to be set
[same message for tqpair=0x19915f0 repeated at .661404, .661411, .661418 and .661424, interleaved with failed-write messages]
00:21:29.991 [2024-11-20 17:15:47.661895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991f90 is same with the state(6) to be set
[same message for tqpair=0x1991f90 repeated at .661920, .661928, .661935 and .661941]
00:21:29.992 [2024-11-20 17:15:47.661982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1991120 is same with the state(6) to be set
[same message for tqpair=0x1991120 repeated at .662003, .662010, .662017, .662023, .662029, .662035, .662041, .662047 and .662053]
00:21:29.992 [2024-11-20 17:15:47.662066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[further alternating "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs]
00:21:29.992 [2024-11-20 17:15:47.662532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a5440 is same with the state(6) to be set
[same message for tqpair=0x17a5440 repeated at .662543, .662554, .662560, .662566 and .662573]
00:21:29.992 [2024-11-20 17:15:47.662834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a5910 is same with the state(6) to be set
[same message for tqpair=0x17a5910 repeated at .662855, .662866, .662878, .662887, .662897 and .662913]
00:21:29.992 [2024-11-20 17:15:47.662863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[further alternating failed-write messages]
00:21:29.992 [2024-11-20 17:15:47.663229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1990c50 is same with the state(6) to be set
[same message for tqpair=0x1990c50 repeated at .663241, .663247, .663257, .663263, .663269, .663275, .663281, .663288, .663293 and .663299]
00:21:29.993 [2024-11-20 17:15:47.663661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a4f70 is same with the state(6) to be set
[same message for tqpair=0x17a4f70 repeated at .663674, .663682, .663688, .663694, .663699, .663707, .663712 and .663718]
00:21:29.993 [2024-11-20 17:15:47.663888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[long run of alternating "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs]
00:21:29.994 [2024-11-20 17:15:47.665427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:29.994 NVMe io qpair process completion error
[runs of "Write completed with error (sct=0, sc=8)" messages, with periodic "starting I/O failed: -6"]
00:21:29.994 [2024-11-20 17:15:47.666637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[further alternating failed-write messages]
00:21:29.994 [2024-11-20 17:15:47.667447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[further alternating failed-write messages]
00:21:29.995 [2024-11-20 17:15:47.668457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:29.995 Write completed with error (sct=0, sc=8)
00:21:29.995 starting I/O failed: -6
[the two messages above alternate several more times]
00:21:29.995 Write completed with
error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed 
with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.995 Write completed with error (sct=0, sc=8) 00:21:29.995 starting I/O failed: -6 00:21:29.996 Write 
completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 [2024-11-20 17:15:47.670313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:29.996 NVMe io qpair process completion error 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with 
error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 
00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 [2024-11-20 17:15:47.671429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.996 starting I/O failed: -6 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with 
error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 [2024-11-20 17:15:47.672241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 
starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 
Write completed with error (sct=0, sc=8) 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.996 Write completed with error (sct=0, sc=8) 00:21:29.996 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 [2024-11-20 17:15:47.673270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, 
sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error 
(sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with 
error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 [2024-11-20 17:15:47.677143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:29.997 NVMe io qpair process completion error 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write 
completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 [2024-11-20 17:15:47.678323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write 
completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 starting I/O failed: -6 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.997 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O 
failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 [2024-11-20 17:15:47.679186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O 
failed: -6 00:21:29.998 Write completed with error (sct=0, sc=8) 00:21:29.998 starting I/O failed: -6
00:21:29.998 [2024-11-20 17:15:47.680193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:29.999 [2024-11-20 17:15:47.684007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:29.999 NVMe io qpair process completion error
00:21:29.999 [2024-11-20 17:15:47.685126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:29.999 [2024-11-20 17:15:47.685934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:30.000 [2024-11-20 17:15:47.686924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:30.000 [2024-11-20 17:15:47.688969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:30.000 NVMe io qpair process completion error
00:21:30.001 [2024-11-20 17:15:47.689979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:30.001 [2024-11-20 17:15:47.690758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:30.001 [2024-11-20 17:15:47.691787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:30.002 [2024-11-20 17:15:47.693674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:30.002 NVMe io qpair process completion error
00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write
completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 [2024-11-20 17:15:47.694897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O 
failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 Write completed with error (sct=0, sc=8) 00:21:30.002 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, 
sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 [2024-11-20 17:15:47.695761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 
00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with 
error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 [2024-11-20 17:15:47.696781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O 
failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting 
I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.003 starting I/O failed: -6 00:21:30.003 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 
starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 [2024-11-20 17:15:47.701061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.004 NVMe io qpair process completion error 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write 
completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 [2024-11-20 17:15:47.702019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.004 starting I/O failed: -6 00:21:30.004 starting I/O failed: -6 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write 
completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 [2024-11-20 17:15:47.702936] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with 
error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.004 starting I/O failed: -6 00:21:30.004 Write completed with error (sct=0, sc=8) 00:21:30.005 
starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 [2024-11-20 17:15:47.703930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O 
failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting I/O failed: -6 00:21:30.005 Write completed with error (sct=0, sc=8) 00:21:30.005 starting 
I/O failed: -6
00:21:30.005 Write completed with error (sct=0, sc=8)
00:21:30.005 starting I/O failed: -6
00:21:30.005 [2024-11-20 17:15:47.707158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:30.005 NVMe io qpair process completion error
00:21:30.006 [2024-11-20 17:15:47.709657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:30.007 [2024-11-20 17:15:47.712150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:30.007 NVMe io qpair process completion error
00:21:30.007 Initializing NVMe Controllers
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:30.007 Controller IO queue size 128, less than required.
00:21:30.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:30.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:30.007 Initialization complete. Launching workers.
00:21:30.007 ========================================================
00:21:30.007 Latency(us)
00:21:30.007 Device Information : IOPS MiB/s Average min max
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2255.32 96.91 56760.24 648.39 103560.11
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2226.99 95.69 57522.17 546.52 119690.10
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2251.54 96.75 56931.50 733.05 118473.21
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2228.88 95.77 57525.15 769.53 117491.52
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2206.42 94.81 58127.76 879.41 116378.26
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2208.10 94.88 57522.42 732.32 96939.84
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2156.68 92.67 59467.59 688.30 118469.05
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2130.87 91.56 60212.02 718.66 123462.60
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2130.45 91.54 59601.90 794.25 98630.41
00:21:30.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2237.48 96.14 56761.56 633.35 96212.15
00:21:30.007 ========================================================
00:21:30.007 Total : 22032.73 946.72 58019.22 546.52 123462.60
00:21:30.007
00:21:30.007 [2024-11-20 17:15:47.715146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5560 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7ae0 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5ef0 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c6410 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5890 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7720 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c6a70 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c6740 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5bc0 is same with the state(6) to be set
00:21:30.007 [2024-11-20 17:15:47.715430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7900 is same with the state(6) to be set
00:21:30.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:30.266 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2555724
00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2555724
00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640
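The spdk_nvme_perf summary above reports per-controller IOPS and average latency followed by a Total row. As a quick cross-check, the Total figures can be reproduced from the per-controller rows; the sketch below copies the numbers from the table, but the aggregation code itself is our illustration, not SPDK code.

```python
# Sanity-check of the spdk_nvme_perf summary printed above.
# (iops, avg_latency_us) per subsystem, copied from the table rows.
rows = {
    "cnode3": (2255.32, 56760.24),
    "cnode2": (2226.99, 57522.17),
    "cnode6": (2251.54, 56931.50),
    "cnode7": (2228.88, 57525.15),
    "cnode4": (2206.42, 58127.76),
    "cnode1": (2208.10, 57522.42),
    "cnode9": (2156.68, 59467.59),
    "cnode8": (2130.87, 60212.02),
    "cnode5": (2130.45, 59601.90),
    "cnode10": (2237.48, 56761.56),
}

# Total IOPS is the plain sum of the per-controller rates.
total_iops = sum(iops for iops, _ in rows.values())

# The aggregate average latency is IOPS-weighted: controllers that
# completed more I/O contribute proportionally more to the mean.
avg_latency_us = sum(iops * avg for iops, avg in rows.values()) / total_iops
```

Both values land on the Total row printed by the tool (22032.73 IOPS, ~58019.22 us average), which suggests the Total average is an IOPS-weighted aggregate rather than an unweighted mean of the per-device averages.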
-- # local arg=wait 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2555724 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.203 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.204 rmmod nvme_tcp 00:21:31.204 rmmod nvme_fabrics 00:21:31.204 rmmod nvme_keyring 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2555615 ']' 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2555615 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2555615 ']' 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2555615 00:21:31.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2555615) - No such process 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2555615 is not found' 00:21:31.204 Process with pid 2555615 is not found 00:21:31.204 17:15:49 
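The `killprocess` trace above probes the target with `kill -0 2555615` and, on `No such process`, reports the pid as already gone instead of failing. Signal 0 delivers nothing; it only tests whether the PID is visible. A standalone sketch of the same existence check (the helper name is ours for illustration, not the autotest_common.sh function):

```python
import os

def is_running(pid: int) -> bool:
    """Existence probe in the spirit of the shell's `kill -0 <pid>`."""
    try:
        os.kill(pid, 0)  # signal 0: nothing is delivered, only the lookup runs
    except ProcessLookupError:
        return False     # the condition the trace reports as "No such process"
    except PermissionError:
        return True      # pid exists but belongs to another user
    return True
```

This mirrors the branch taken in the log: the probe fails, so the script prints "Process with pid 2555615 is not found" and continues with cleanup.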
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.204 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.740 00:21:33.740 real 0m9.819s 00:21:33.740 user 0m24.898s 00:21:33.740 sys 0m5.222s 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.740 17:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:33.740 ************************************ 00:21:33.740 END TEST nvmf_shutdown_tc4 00:21:33.740 ************************************ 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:33.740 00:21:33.740 real 0m41.045s 00:21:33.740 user 1m40.639s 00:21:33.740 sys 0m14.015s 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:33.740 ************************************ 00:21:33.740 END TEST nvmf_shutdown 00:21:33.740 ************************************ 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:33.740 ************************************ 00:21:33.740 START TEST nvmf_nsid 00:21:33.740 ************************************ 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:33.740 * Looking for test storage... 
00:21:33.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:33.740 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.741 
17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:33.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.741 --rc genhtml_branch_coverage=1 00:21:33.741 --rc genhtml_function_coverage=1 00:21:33.741 --rc genhtml_legend=1 00:21:33.741 --rc geninfo_all_blocks=1 00:21:33.741 --rc 
geninfo_unexecuted_blocks=1 00:21:33.741 00:21:33.741 ' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:33.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.741 --rc genhtml_branch_coverage=1 00:21:33.741 --rc genhtml_function_coverage=1 00:21:33.741 --rc genhtml_legend=1 00:21:33.741 --rc geninfo_all_blocks=1 00:21:33.741 --rc geninfo_unexecuted_blocks=1 00:21:33.741 00:21:33.741 ' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:33.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.741 --rc genhtml_branch_coverage=1 00:21:33.741 --rc genhtml_function_coverage=1 00:21:33.741 --rc genhtml_legend=1 00:21:33.741 --rc geninfo_all_blocks=1 00:21:33.741 --rc geninfo_unexecuted_blocks=1 00:21:33.741 00:21:33.741 ' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:33.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.741 --rc genhtml_branch_coverage=1 00:21:33.741 --rc genhtml_function_coverage=1 00:21:33.741 --rc genhtml_legend=1 00:21:33.741 --rc geninfo_all_blocks=1 00:21:33.741 --rc geninfo_unexecuted_blocks=1 00:21:33.741 00:21:33.741 ' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.741 17:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:33.741 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.742 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:40.312 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:40.312 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.312 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:40.313 Found net devices under 0000:86:00.0: cvl_0_0 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:40.313 Found net devices under 0000:86:00.1: cvl_0_1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.313 17:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.313 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:40.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:21:40.313 00:21:40.313 --- 10.0.0.2 ping statistics --- 00:21:40.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.313 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:21:40.313 00:21:40.313 --- 10.0.0.1 ping statistics --- 00:21:40.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.313 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.313 17:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2560270 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2560270 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2560270 ']' 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 [2024-11-20 17:15:57.518272] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:21:40.313 [2024-11-20 17:15:57.518328] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.313 [2024-11-20 17:15:57.598627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.313 [2024-11-20 17:15:57.638474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.313 [2024-11-20 17:15:57.638510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.313 [2024-11-20 17:15:57.638517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.313 [2024-11-20 17:15:57.638524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.313 [2024-11-20 17:15:57.638529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.313 [2024-11-20 17:15:57.639080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2560422 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.313 
17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ed2bcfab-59bd-4ab8-9e67-e9dee7bd5d03 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ec046d3d-77ec-4402-81a9-26e212ca9ac3 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6471e665-82b6-49ef-844c-a18ae4de5d91 00:21:40.313 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.314 null0 00:21:40.314 null1 00:21:40.314 [2024-11-20 17:15:57.833138] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:21:40.314 [2024-11-20 17:15:57.833192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560422 ] 00:21:40.314 null2 00:21:40.314 [2024-11-20 17:15:57.838939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.314 [2024-11-20 17:15:57.863140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2560422 /var/tmp/tgt2.sock 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2560422 ']' 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:40.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.314 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.314 [2024-11-20 17:15:57.908632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.314 [2024-11-20 17:15:57.954652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.314 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.314 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:40.314 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:40.572 [2024-11-20 17:15:58.476300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.572 [2024-11-20 17:15:58.492410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:40.572 nvme0n1 nvme0n2 00:21:40.572 nvme1n1 00:21:40.572 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:40.572 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:40.572 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:41.948 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ed2bcfab-59bd-4ab8-9e67-e9dee7bd5d03 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:42.883 17:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ed2bcfab59bd4ab89e67e9dee7bd5d03 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ED2BCFAB59BD4AB89E67E9DEE7BD5D03 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ ED2BCFAB59BD4AB89E67E9DEE7BD5D03 == \E\D\2\B\C\F\A\B\5\9\B\D\4\A\B\8\9\E\6\7\E\9\D\E\E\7\B\D\5\D\0\3 ]] 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ec046d3d-77ec-4402-81a9-26e212ca9ac3 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:42.883 
17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ec046d3d77ec440281a926e212ca9ac3 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EC046D3D77EC440281A926E212CA9AC3 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ EC046D3D77EC440281A926E212CA9AC3 == \E\C\0\4\6\D\3\D\7\7\E\C\4\4\0\2\8\1\A\9\2\6\E\2\1\2\C\A\9\A\C\3 ]] 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.883 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6471e665-82b6-49ef-844c-a18ae4de5d91 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6471e66582b649ef844ca18ae4de5d91 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6471E66582B649EF844CA18AE4DE5D91 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6471E66582B649EF844CA18AE4DE5D91 == \6\4\7\1\E\6\6\5\8\2\B\6\4\9\E\F\8\4\4\C\A\1\8\A\E\4\D\E\5\D\9\1 ]] 00:21:42.884 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2560422 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2560422 ']' 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2560422 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2560422 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2560422' 00:21:43.142 killing process with pid 2560422 00:21:43.142 17:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2560422 00:21:43.142 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2560422 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.400 rmmod nvme_tcp 00:21:43.400 rmmod nvme_fabrics 00:21:43.400 rmmod nvme_keyring 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.400 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2560270 ']' 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2560270 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2560270 ']' 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2560270 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.657 17:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2560270 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2560270' 00:21:43.657 killing process with pid 2560270 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2560270 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2560270 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.657 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.657 17:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.194 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.194 00:21:46.194 real 0m12.409s 00:21:46.194 user 0m9.686s 00:21:46.194 sys 0m5.516s 00:21:46.194 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.194 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:46.194 ************************************ 00:21:46.194 END TEST nvmf_nsid 00:21:46.194 ************************************ 00:21:46.194 17:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:46.194 00:21:46.194 real 11m58.814s 00:21:46.194 user 25m37.911s 00:21:46.194 sys 3m41.550s 00:21:46.194 17:16:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.194 17:16:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.194 ************************************ 00:21:46.194 END TEST nvmf_target_extra 00:21:46.194 ************************************ 00:21:46.194 17:16:03 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:46.194 17:16:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:46.194 17:16:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.194 17:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.194 ************************************ 00:21:46.194 START TEST nvmf_host 00:21:46.194 ************************************ 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:46.194 * Looking for test storage... 
00:21:46.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.194 --rc genhtml_branch_coverage=1 00:21:46.194 --rc genhtml_function_coverage=1 00:21:46.194 --rc genhtml_legend=1 00:21:46.194 --rc geninfo_all_blocks=1 00:21:46.194 --rc geninfo_unexecuted_blocks=1 00:21:46.194 00:21:46.194 ' 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.194 --rc genhtml_branch_coverage=1 00:21:46.194 --rc genhtml_function_coverage=1 00:21:46.194 --rc genhtml_legend=1 00:21:46.194 --rc 
geninfo_all_blocks=1 00:21:46.194 --rc geninfo_unexecuted_blocks=1 00:21:46.194 00:21:46.194 ' 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.194 --rc genhtml_branch_coverage=1 00:21:46.194 --rc genhtml_function_coverage=1 00:21:46.194 --rc genhtml_legend=1 00:21:46.194 --rc geninfo_all_blocks=1 00:21:46.194 --rc geninfo_unexecuted_blocks=1 00:21:46.194 00:21:46.194 ' 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.194 --rc genhtml_branch_coverage=1 00:21:46.194 --rc genhtml_function_coverage=1 00:21:46.194 --rc genhtml_legend=1 00:21:46.194 --rc geninfo_all_blocks=1 00:21:46.194 --rc geninfo_unexecuted_blocks=1 00:21:46.194 00:21:46.194 ' 00:21:46.194 17:16:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.194 17:16:04 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.195 ************************************ 00:21:46.195 START TEST nvmf_multicontroller 00:21:46.195 ************************************ 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:46.195 * Looking for test storage... 
00:21:46.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.195 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.455 --rc genhtml_branch_coverage=1 00:21:46.455 --rc genhtml_function_coverage=1 
00:21:46.455 --rc genhtml_legend=1 00:21:46.455 --rc geninfo_all_blocks=1 00:21:46.455 --rc geninfo_unexecuted_blocks=1 00:21:46.455 00:21:46.455 ' 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.455 --rc genhtml_branch_coverage=1 00:21:46.455 --rc genhtml_function_coverage=1 00:21:46.455 --rc genhtml_legend=1 00:21:46.455 --rc geninfo_all_blocks=1 00:21:46.455 --rc geninfo_unexecuted_blocks=1 00:21:46.455 00:21:46.455 ' 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.455 --rc genhtml_branch_coverage=1 00:21:46.455 --rc genhtml_function_coverage=1 00:21:46.455 --rc genhtml_legend=1 00:21:46.455 --rc geninfo_all_blocks=1 00:21:46.455 --rc geninfo_unexecuted_blocks=1 00:21:46.455 00:21:46.455 ' 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.455 --rc genhtml_branch_coverage=1 00:21:46.455 --rc genhtml_function_coverage=1 00:21:46.455 --rc genhtml_legend=1 00:21:46.455 --rc geninfo_all_blocks=1 00:21:46.455 --rc geninfo_unexecuted_blocks=1 00:21:46.455 00:21:46.455 ' 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:46.455 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.456 17:16:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.456 17:16:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:51.861 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:51.861 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.861 17:16:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.861 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:51.862 Found net devices under 0000:86:00.0: cvl_0_0 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:51.862 Found net devices under 0000:86:00.1: cvl_0_1 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:51.862 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.121 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.121 17:16:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:21:52.121 00:21:52.121 --- 10.0.0.2 ping statistics --- 00:21:52.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.121 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
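[Editor's aside, not part of the trace: the `nvmf_tcp_init` steps above move one port of the NIC (`cvl_0_0`) into a private network namespace so a single host can act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, on the host stack), then verify reachability with the two pings. A minimal dry-run sketch of that ip-netns pattern follows; the `run`/`setup_loopback_net` helper names are illustrative, and `DRY_RUN=1` prints the commands instead of executing them, since the real sequence needs root and physical interfaces.]

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based loopback topology built by nvmftestinit.
set -euo pipefail

# Print the command under DRY_RUN=1, otherwise execute it.
run() { if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "$*"; else "$@"; fi; }

setup_loopback_net() {
    local target_if=$1 initiator_if=$2 ns=$3
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"          # target side lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator keeps the host stack
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up         # loopback needed inside the namespace too
}

# Demo: emit the command sequence for the interfaces seen in this log.
DRY_RUN=1 setup_loopback_net cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Because the two ports sit in separate network stacks, traffic between 10.0.0.1 and 10.0.0.2 actually traverses the NIC rather than being short-circuited through the kernel loopback path.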
00:21:52.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:21:52.121 00:21:52.121 --- 10.0.0.1 ping statistics --- 00:21:52.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.121 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.121 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2564530 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2564530 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2564530 ']' 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.381 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.381 [2024-11-20 17:16:10.234713] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:21:52.381 [2024-11-20 17:16:10.234759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.381 [2024-11-20 17:16:10.311090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:52.381 [2024-11-20 17:16:10.353797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.381 [2024-11-20 17:16:10.353829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
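[Editor's aside, not part of the trace: `waitforlisten` above blocks until the freshly launched `nvmf_tgt` answers on its UNIX-domain RPC socket before any `rpc_cmd` is issued. A rough stand-alone equivalent is sketched below; the function name, polling interval, and retry count are illustrative, and the real helper additionally probes the socket with an RPC rather than only checking that the node exists.]

```shell
# Poll until an SPDK-style RPC socket appears, or give up after N retries.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        # -S tests for a socket file; a plain file at the path does not count.
        [[ -S "$sock" ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Polling for the socket (instead of a fixed `sleep`) keeps startup fast on idle machines while still tolerating the slow, heavily loaded CI nodes this log comes from.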
00:21:52.381 [2024-11-20 17:16:10.353836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.381 [2024-11-20 17:16:10.353842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.381 [2024-11-20 17:16:10.353847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.381 [2024-11-20 17:16:10.355127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.381 [2024-11-20 17:16:10.355163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.381 [2024-11-20 17:16:10.355163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.641 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.641 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:52.641 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.641 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.641 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.641 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.641 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.641 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 [2024-11-20 17:16:10.493082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 Malloc0 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 [2024-11-20 
17:16:10.556431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 [2024-11-20 17:16:10.564356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 Malloc1 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2564766 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2564766 /var/tmp/bdevperf.sock 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2564766 ']' 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.642 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.901 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.901 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:52.901 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:52.901 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.901 17:16:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.160 NVMe0n1 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.160 1 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:53.160 17:16:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.160 request: 00:21:53.160 { 00:21:53.160 "name": "NVMe0", 00:21:53.160 "trtype": "tcp", 00:21:53.160 "traddr": "10.0.0.2", 00:21:53.160 "adrfam": "ipv4", 00:21:53.160 "trsvcid": "4420", 00:21:53.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.160 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:53.160 "hostaddr": "10.0.0.1", 00:21:53.160 "prchk_reftag": false, 00:21:53.160 "prchk_guard": false, 00:21:53.160 "hdgst": false, 00:21:53.160 "ddgst": false, 00:21:53.160 "allow_unrecognized_csi": false, 00:21:53.160 "method": "bdev_nvme_attach_controller", 00:21:53.160 "req_id": 1 00:21:53.160 } 00:21:53.160 Got JSON-RPC error response 00:21:53.160 response: 00:21:53.160 { 00:21:53.160 "code": -114, 00:21:53.160 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:53.160 } 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:53.160 17:16:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.160 request: 00:21:53.160 { 00:21:53.160 "name": "NVMe0", 00:21:53.160 "trtype": "tcp", 00:21:53.160 "traddr": "10.0.0.2", 00:21:53.160 "adrfam": "ipv4", 00:21:53.160 "trsvcid": "4420", 00:21:53.160 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.160 "hostaddr": "10.0.0.1", 00:21:53.160 "prchk_reftag": false, 00:21:53.160 "prchk_guard": false, 00:21:53.160 "hdgst": false, 00:21:53.160 "ddgst": false, 00:21:53.160 "allow_unrecognized_csi": false, 00:21:53.160 "method": "bdev_nvme_attach_controller", 00:21:53.160 "req_id": 1 00:21:53.160 } 00:21:53.160 Got JSON-RPC error response 00:21:53.160 response: 00:21:53.160 { 00:21:53.160 "code": -114, 00:21:53.160 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:53.160 } 00:21:53.160 17:16:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:53.160 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.161 request: 00:21:53.161 { 00:21:53.161 "name": "NVMe0", 00:21:53.161 "trtype": "tcp", 00:21:53.161 "traddr": "10.0.0.2", 00:21:53.161 "adrfam": "ipv4", 00:21:53.161 "trsvcid": "4420", 00:21:53.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.161 "hostaddr": "10.0.0.1", 00:21:53.161 "prchk_reftag": false, 00:21:53.161 "prchk_guard": false, 00:21:53.161 "hdgst": false, 00:21:53.161 "ddgst": false, 00:21:53.161 "multipath": "disable", 00:21:53.161 "allow_unrecognized_csi": false, 00:21:53.161 "method": "bdev_nvme_attach_controller", 00:21:53.161 "req_id": 1 00:21:53.161 } 00:21:53.161 Got JSON-RPC error response 00:21:53.161 response: 00:21:53.161 { 00:21:53.161 "code": -114, 00:21:53.161 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:53.161 } 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.161 request: 00:21:53.161 { 00:21:53.161 "name": "NVMe0", 00:21:53.161 "trtype": "tcp", 00:21:53.161 "traddr": "10.0.0.2", 00:21:53.161 "adrfam": "ipv4", 00:21:53.161 "trsvcid": "4420", 00:21:53.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.161 "hostaddr": "10.0.0.1", 00:21:53.161 "prchk_reftag": false, 00:21:53.161 "prchk_guard": false, 00:21:53.161 "hdgst": false, 00:21:53.161 "ddgst": false, 00:21:53.161 "multipath": "failover", 00:21:53.161 "allow_unrecognized_csi": false, 00:21:53.161 "method": "bdev_nvme_attach_controller", 00:21:53.161 "req_id": 1 00:21:53.161 } 00:21:53.161 Got JSON-RPC error response 00:21:53.161 response: 00:21:53.161 { 00:21:53.161 "code": -114, 00:21:53.161 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:53.161 } 00:21:53.161 17:16:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.161 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.161 NVMe0n1 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.420 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:53.420 17:16:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:54.799 { 00:21:54.799 "results": [ 00:21:54.799 { 00:21:54.799 "job": "NVMe0n1", 00:21:54.799 "core_mask": "0x1", 00:21:54.799 "workload": "write", 00:21:54.799 "status": "finished", 00:21:54.799 "queue_depth": 128, 00:21:54.799 "io_size": 4096, 00:21:54.799 "runtime": 1.004693, 00:21:54.799 "iops": 24916.068888705307, 00:21:54.799 "mibps": 97.32839409650511, 00:21:54.799 "io_failed": 0, 00:21:54.799 "io_timeout": 0, 00:21:54.799 "avg_latency_us": 5131.015402525809, 00:21:54.799 "min_latency_us": 3151.9695238095237, 00:21:54.799 "max_latency_us": 12545.462857142857 00:21:54.799 } 00:21:54.799 ], 00:21:54.799 "core_count": 1 00:21:54.799 } 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2564766 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2564766 ']' 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2564766 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2564766 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2564766' 00:21:54.799 killing process with pid 2564766 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2564766 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2564766 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:54.799 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:54.799 [2024-11-20 17:16:10.670346] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:21:54.799 [2024-11-20 17:16:10.670397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564766 ] 00:21:54.799 [2024-11-20 17:16:10.745128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.799 [2024-11-20 17:16:10.785844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.799 [2024-11-20 17:16:11.289034] bdev.c:4912:bdev_name_add: *ERROR*: Bdev name eb59b9cd-004c-421c-bbe4-0a6d291ae067 already exists 00:21:54.799 [2024-11-20 17:16:11.289061] bdev.c:8112:bdev_register: *ERROR*: Unable to add uuid:eb59b9cd-004c-421c-bbe4-0a6d291ae067 alias for bdev NVMe1n1 00:21:54.799 [2024-11-20 17:16:11.289069] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:54.799 Running I/O for 1 seconds... 00:21:54.799 24905.00 IOPS, 97.29 MiB/s 00:21:54.799 Latency(us) 00:21:54.799 [2024-11-20T16:16:12.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.799 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:54.799 NVMe0n1 : 1.00 24916.07 97.33 0.00 0.00 5131.02 3151.97 12545.46 00:21:54.799 [2024-11-20T16:16:12.842Z] =================================================================================================================== 00:21:54.799 [2024-11-20T16:16:12.842Z] Total : 24916.07 97.33 0.00 0.00 5131.02 3151.97 12545.46 00:21:54.799 Received shutdown signal, test time was about 1.000000 seconds 00:21:54.799 00:21:54.799 Latency(us) 00:21:54.799 [2024-11-20T16:16:12.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.799 [2024-11-20T16:16:12.842Z] =================================================================================================================== 00:21:54.799 [2024-11-20T16:16:12.842Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:54.799 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.799 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.799 rmmod nvme_tcp 00:21:54.799 rmmod nvme_fabrics 00:21:54.799 rmmod nvme_keyring 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2564530 ']' 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2564530 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2564530 ']' 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2564530 
00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2564530 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2564530' 00:21:54.800 killing process with pid 2564530 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2564530 00:21:54.800 17:16:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2564530 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.059 17:16:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.594 00:21:57.594 real 0m11.025s 00:21:57.594 user 0m11.773s 00:21:57.594 sys 0m5.225s 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.594 ************************************ 00:21:57.594 END TEST nvmf_multicontroller 00:21:57.594 ************************************ 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.594 ************************************ 00:21:57.594 START TEST nvmf_aer 00:21:57.594 ************************************ 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.594 * Looking for test storage... 
00:21:57.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:57.594 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:57.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.595 --rc genhtml_branch_coverage=1 00:21:57.595 --rc genhtml_function_coverage=1 00:21:57.595 --rc genhtml_legend=1 00:21:57.595 --rc geninfo_all_blocks=1 00:21:57.595 --rc geninfo_unexecuted_blocks=1 00:21:57.595 00:21:57.595 ' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:57.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.595 --rc 
genhtml_branch_coverage=1 00:21:57.595 --rc genhtml_function_coverage=1 00:21:57.595 --rc genhtml_legend=1 00:21:57.595 --rc geninfo_all_blocks=1 00:21:57.595 --rc geninfo_unexecuted_blocks=1 00:21:57.595 00:21:57.595 ' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:57.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.595 --rc genhtml_branch_coverage=1 00:21:57.595 --rc genhtml_function_coverage=1 00:21:57.595 --rc genhtml_legend=1 00:21:57.595 --rc geninfo_all_blocks=1 00:21:57.595 --rc geninfo_unexecuted_blocks=1 00:21:57.595 00:21:57.595 ' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:57.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.595 --rc genhtml_branch_coverage=1 00:21:57.595 --rc genhtml_function_coverage=1 00:21:57.595 --rc genhtml_legend=1 00:21:57.595 --rc geninfo_all_blocks=1 00:21:57.595 --rc geninfo_unexecuted_blocks=1 00:21:57.595 00:21:57.595 ' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.595 17:16:15 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.595 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.596 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.596 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.596 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.596 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.596 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.596 17:16:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.167 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.167 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.167 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:04.168 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:04.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.168 17:16:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:04.168 Found net devices under 0000:86:00.0: cvl_0_0 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:04.168 Found net devices under 0000:86:00.1: cvl_0_1 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:04.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:04.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:22:04.168 00:22:04.168 --- 10.0.0.2 ping statistics --- 00:22:04.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.168 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:04.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:04.168 00:22:04.168 --- 10.0.0.1 ping statistics --- 00:22:04.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.168 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.168 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2568543 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2568543 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2568543 ']' 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.169 17:16:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 [2024-11-20 17:16:21.378941] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:22:04.169 [2024-11-20 17:16:21.378982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.169 [2024-11-20 17:16:21.455110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.169 [2024-11-20 17:16:21.497412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:04.169 [2024-11-20 17:16:21.497448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.169 [2024-11-20 17:16:21.497455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.169 [2024-11-20 17:16:21.497461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.169 [2024-11-20 17:16:21.497467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.169 [2024-11-20 17:16:21.498937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.169 [2024-11-20 17:16:21.499050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.169 [2024-11-20 17:16:21.499083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.169 [2024-11-20 17:16:21.499083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.428 [2024-11-20 17:16:22.263825] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.428 Malloc0 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.428 [2024-11-20 17:16:22.325089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.428 [ 00:22:04.428 { 00:22:04.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:04.428 "subtype": "Discovery", 00:22:04.428 "listen_addresses": [], 00:22:04.428 "allow_any_host": true, 00:22:04.428 "hosts": [] 00:22:04.428 }, 00:22:04.428 { 00:22:04.428 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.428 "subtype": "NVMe", 00:22:04.428 "listen_addresses": [ 00:22:04.428 { 00:22:04.428 "trtype": "TCP", 00:22:04.428 "adrfam": "IPv4", 00:22:04.428 "traddr": "10.0.0.2", 00:22:04.428 "trsvcid": "4420" 00:22:04.428 } 00:22:04.428 ], 00:22:04.428 "allow_any_host": true, 00:22:04.428 "hosts": [], 00:22:04.428 "serial_number": "SPDK00000000000001", 00:22:04.428 "model_number": "SPDK bdev Controller", 00:22:04.428 "max_namespaces": 2, 00:22:04.428 "min_cntlid": 1, 00:22:04.428 "max_cntlid": 65519, 00:22:04.428 "namespaces": [ 00:22:04.428 { 00:22:04.428 "nsid": 1, 00:22:04.428 "bdev_name": "Malloc0", 00:22:04.428 "name": "Malloc0", 00:22:04.428 "nguid": "5F2D55190DE2451EAF7AEA099E60F18B", 00:22:04.428 "uuid": "5f2d5519-0de2-451e-af7a-ea099e60f18b" 00:22:04.428 } 00:22:04.428 ] 00:22:04.428 } 00:22:04.428 ] 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2568791 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:04.428 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.687 Malloc1 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.687 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.687 Asynchronous Event Request test 00:22:04.687 Attaching to 10.0.0.2 00:22:04.687 Attached to 10.0.0.2 00:22:04.687 Registering asynchronous event callbacks... 00:22:04.687 Starting namespace attribute notice tests for all controllers... 00:22:04.687 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:04.687 aer_cb - Changed Namespace 00:22:04.687 Cleaning up... 
00:22:04.687 [ 00:22:04.687 { 00:22:04.687 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:04.687 "subtype": "Discovery", 00:22:04.687 "listen_addresses": [], 00:22:04.687 "allow_any_host": true, 00:22:04.687 "hosts": [] 00:22:04.687 }, 00:22:04.687 { 00:22:04.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.687 "subtype": "NVMe", 00:22:04.687 "listen_addresses": [ 00:22:04.687 { 00:22:04.687 "trtype": "TCP", 00:22:04.687 "adrfam": "IPv4", 00:22:04.687 "traddr": "10.0.0.2", 00:22:04.687 "trsvcid": "4420" 00:22:04.687 } 00:22:04.687 ], 00:22:04.688 "allow_any_host": true, 00:22:04.688 "hosts": [], 00:22:04.688 "serial_number": "SPDK00000000000001", 00:22:04.688 "model_number": "SPDK bdev Controller", 00:22:04.688 "max_namespaces": 2, 00:22:04.688 "min_cntlid": 1, 00:22:04.688 "max_cntlid": 65519, 00:22:04.688 "namespaces": [ 00:22:04.688 { 00:22:04.688 "nsid": 1, 00:22:04.688 "bdev_name": "Malloc0", 00:22:04.688 "name": "Malloc0", 00:22:04.688 "nguid": "5F2D55190DE2451EAF7AEA099E60F18B", 00:22:04.688 "uuid": "5f2d5519-0de2-451e-af7a-ea099e60f18b" 00:22:04.688 }, 00:22:04.688 { 00:22:04.688 "nsid": 2, 00:22:04.688 "bdev_name": "Malloc1", 00:22:04.688 "name": "Malloc1", 00:22:04.688 "nguid": "6355E7506BCD464793D0655CE3F41612", 00:22:04.688 "uuid": "6355e750-6bcd-4647-93d0-655ce3f41612" 00:22:04.688 } 00:22:04.688 ] 00:22:04.688 } 00:22:04.688 ] 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2568791 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.688 17:16:22 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.688 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.688 rmmod nvme_tcp 00:22:04.688 rmmod nvme_fabrics 00:22:04.688 rmmod nvme_keyring 00:22:04.947 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.947 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:04.947 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:04.947 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2568543 ']' 00:22:04.947 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2568543 00:22:04.947 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2568543 ']' 00:22:04.947 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2568543 00:22:04.947 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2568543 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2568543' 00:22:04.948 killing process with pid 2568543 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2568543 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2568543 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.948 17:16:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.485 00:22:07.485 real 0m9.872s 00:22:07.485 user 0m7.782s 00:22:07.485 sys 0m4.862s 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:07.485 ************************************ 00:22:07.485 END TEST nvmf_aer 00:22:07.485 ************************************ 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.485 ************************************ 00:22:07.485 START TEST nvmf_async_init 00:22:07.485 ************************************ 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:07.485 * Looking for test storage... 
00:22:07.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.485 17:16:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:07.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.485 --rc genhtml_branch_coverage=1 00:22:07.485 --rc genhtml_function_coverage=1 00:22:07.485 --rc genhtml_legend=1 00:22:07.485 --rc geninfo_all_blocks=1 00:22:07.485 --rc geninfo_unexecuted_blocks=1 00:22:07.485 
00:22:07.485 ' 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:07.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.485 --rc genhtml_branch_coverage=1 00:22:07.485 --rc genhtml_function_coverage=1 00:22:07.485 --rc genhtml_legend=1 00:22:07.485 --rc geninfo_all_blocks=1 00:22:07.485 --rc geninfo_unexecuted_blocks=1 00:22:07.485 00:22:07.485 ' 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:07.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.485 --rc genhtml_branch_coverage=1 00:22:07.485 --rc genhtml_function_coverage=1 00:22:07.485 --rc genhtml_legend=1 00:22:07.485 --rc geninfo_all_blocks=1 00:22:07.485 --rc geninfo_unexecuted_blocks=1 00:22:07.485 00:22:07.485 ' 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:07.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.485 --rc genhtml_branch_coverage=1 00:22:07.485 --rc genhtml_function_coverage=1 00:22:07.485 --rc genhtml_legend=1 00:22:07.485 --rc geninfo_all_blocks=1 00:22:07.485 --rc geninfo_unexecuted_blocks=1 00:22:07.485 00:22:07.485 ' 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.485 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f6c965cef91a43458092ec84cd8606a7 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.486 17:16:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.055 17:16:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.055 17:16:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:14.055 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:14.055 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.055 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:14.056 Found net devices under 0000:86:00.0: cvl_0_0 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:14.056 Found net devices under 0000:86:00.1: cvl_0_1 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:14.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:22:14.056 00:22:14.056 --- 10.0.0.2 ping statistics --- 00:22:14.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.056 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:22:14.056 00:22:14.056 --- 10.0.0.1 ping statistics --- 00:22:14.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.056 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2572319 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2572319 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2572319 ']' 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.056 [2024-11-20 17:16:31.354337] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:22:14.056 [2024-11-20 17:16:31.354381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.056 [2024-11-20 17:16:31.433832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.056 [2024-11-20 17:16:31.472588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.056 [2024-11-20 17:16:31.472621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.056 [2024-11-20 17:16:31.472627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.056 [2024-11-20 17:16:31.472633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.056 [2024-11-20 17:16:31.472638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
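The namespace wiring that `nvmf_tcp_init` performed earlier in this log (move the target port into `cvl_0_0_ns_spdk`, assign 10.0.0.1/10.0.0.2, open TCP 4420) can be sketched as below. This is a minimal dry-run sketch, not the real `nvmf/common.sh`: the interface names `veth_tgt`/`veth_ini` and the `DRY_RUN` switch are illustration-only stand-ins for the physical `cvl_0_0`/`cvl_0_1` ports the CI host uses.

```shell
#!/usr/bin/env bash
# Sketch only: mirrors the ip/iptables steps visible in the xtrace above.
# veth_tgt/veth_ini and DRY_RUN are assumptions added for illustration.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=${TARGET_IF:-veth_tgt}        # stands in for cvl_0_0
INITIATOR_IF=${INITIATOR_IF:-veth_ini}  # stands in for cvl_0_1
DRY_RUN=${DRY_RUN:-1}

# In dry-run mode just print each command instead of executing it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_ns() {
  run ip netns add "$NS"
  run ip link set "$TARGET_IF" netns "$NS"
  # Initiator keeps 10.0.0.1; the namespaced target answers on 10.0.0.2.
  run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  run ip link set "$INITIATOR_IF" up
  run ip netns exec "$NS" ip link set "$TARGET_IF" up
  run ip netns exec "$NS" ip link set lo up
  # Open the NVMe/TCP discovery port, as the ipts() wrapper does in the log.
  run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_ns
```

With `DRY_RUN=0` (and root plus a real veth pair) the same function would apply the configuration instead of printing it.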
00:22:14.056 [2024-11-20 17:16:31.473208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.056 [2024-11-20 17:16:31.617453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.056 null0 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.056 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f6c965cef91a43458092ec84cd8606a7 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 [2024-11-20 17:16:31.669736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 nvme0n1 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 [ 00:22:14.057 { 00:22:14.057 "name": "nvme0n1", 00:22:14.057 "aliases": [ 00:22:14.057 "f6c965ce-f91a-4345-8092-ec84cd8606a7" 00:22:14.057 ], 00:22:14.057 "product_name": "NVMe disk", 00:22:14.057 "block_size": 512, 00:22:14.057 "num_blocks": 2097152, 00:22:14.057 "uuid": "f6c965ce-f91a-4345-8092-ec84cd8606a7", 00:22:14.057 "numa_id": 1, 00:22:14.057 "assigned_rate_limits": { 00:22:14.057 "rw_ios_per_sec": 0, 00:22:14.057 "rw_mbytes_per_sec": 0, 00:22:14.057 "r_mbytes_per_sec": 0, 00:22:14.057 "w_mbytes_per_sec": 0 00:22:14.057 }, 00:22:14.057 "claimed": false, 00:22:14.057 "zoned": false, 00:22:14.057 "supported_io_types": { 00:22:14.057 "read": true, 00:22:14.057 "write": true, 00:22:14.057 "unmap": false, 00:22:14.057 "flush": true, 00:22:14.057 "reset": true, 00:22:14.057 "nvme_admin": true, 00:22:14.057 "nvme_io": true, 00:22:14.057 "nvme_io_md": false, 00:22:14.057 "write_zeroes": true, 00:22:14.057 "zcopy": false, 00:22:14.057 "get_zone_info": false, 00:22:14.057 "zone_management": false, 00:22:14.057 "zone_append": false, 00:22:14.057 "compare": true, 00:22:14.057 "compare_and_write": true, 00:22:14.057 "abort": true, 00:22:14.057 "seek_hole": false, 00:22:14.057 "seek_data": false, 00:22:14.057 "copy": true, 00:22:14.057 
"nvme_iov_md": false 00:22:14.057 }, 00:22:14.057 "memory_domains": [ 00:22:14.057 { 00:22:14.057 "dma_device_id": "system", 00:22:14.057 "dma_device_type": 1 00:22:14.057 } 00:22:14.057 ], 00:22:14.057 "driver_specific": { 00:22:14.057 "nvme": [ 00:22:14.057 { 00:22:14.057 "trid": { 00:22:14.057 "trtype": "TCP", 00:22:14.057 "adrfam": "IPv4", 00:22:14.057 "traddr": "10.0.0.2", 00:22:14.057 "trsvcid": "4420", 00:22:14.057 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:14.057 }, 00:22:14.057 "ctrlr_data": { 00:22:14.057 "cntlid": 1, 00:22:14.057 "vendor_id": "0x8086", 00:22:14.057 "model_number": "SPDK bdev Controller", 00:22:14.057 "serial_number": "00000000000000000000", 00:22:14.057 "firmware_revision": "25.01", 00:22:14.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.057 "oacs": { 00:22:14.057 "security": 0, 00:22:14.057 "format": 0, 00:22:14.057 "firmware": 0, 00:22:14.057 "ns_manage": 0 00:22:14.057 }, 00:22:14.057 "multi_ctrlr": true, 00:22:14.057 "ana_reporting": false 00:22:14.057 }, 00:22:14.057 "vs": { 00:22:14.057 "nvme_version": "1.3" 00:22:14.057 }, 00:22:14.057 "ns_data": { 00:22:14.057 "id": 1, 00:22:14.057 "can_share": true 00:22:14.057 } 00:22:14.057 } 00:22:14.057 ], 00:22:14.057 "mp_policy": "active_passive" 00:22:14.057 } 00:22:14.057 } 00:22:14.057 ] 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.057 17:16:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 [2024-11-20 17:16:31.938264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:14.057 [2024-11-20 17:16:31.938322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xe67e10 (9): Bad file descriptor 00:22:14.057 [2024-11-20 17:16:32.070285] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:14.057 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.057 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:14.057 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.057 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 [ 00:22:14.057 { 00:22:14.057 "name": "nvme0n1", 00:22:14.057 "aliases": [ 00:22:14.057 "f6c965ce-f91a-4345-8092-ec84cd8606a7" 00:22:14.057 ], 00:22:14.057 "product_name": "NVMe disk", 00:22:14.057 "block_size": 512, 00:22:14.057 "num_blocks": 2097152, 00:22:14.057 "uuid": "f6c965ce-f91a-4345-8092-ec84cd8606a7", 00:22:14.057 "numa_id": 1, 00:22:14.057 "assigned_rate_limits": { 00:22:14.057 "rw_ios_per_sec": 0, 00:22:14.057 "rw_mbytes_per_sec": 0, 00:22:14.057 "r_mbytes_per_sec": 0, 00:22:14.057 "w_mbytes_per_sec": 0 00:22:14.057 }, 00:22:14.057 "claimed": false, 00:22:14.057 "zoned": false, 00:22:14.057 "supported_io_types": { 00:22:14.057 "read": true, 00:22:14.057 "write": true, 00:22:14.057 "unmap": false, 00:22:14.057 "flush": true, 00:22:14.057 "reset": true, 00:22:14.057 "nvme_admin": true, 00:22:14.057 "nvme_io": true, 00:22:14.057 "nvme_io_md": false, 00:22:14.057 "write_zeroes": true, 00:22:14.057 "zcopy": false, 00:22:14.057 "get_zone_info": false, 00:22:14.057 "zone_management": false, 00:22:14.057 "zone_append": false, 00:22:14.057 "compare": true, 00:22:14.057 "compare_and_write": true, 00:22:14.057 "abort": true, 00:22:14.057 "seek_hole": false, 00:22:14.057 "seek_data": false, 00:22:14.057 "copy": true, 00:22:14.057 "nvme_iov_md": false 00:22:14.057 }, 00:22:14.057 "memory_domains": [ 
00:22:14.057 { 00:22:14.057 "dma_device_id": "system", 00:22:14.057 "dma_device_type": 1 00:22:14.057 } 00:22:14.057 ], 00:22:14.057 "driver_specific": { 00:22:14.057 "nvme": [ 00:22:14.057 { 00:22:14.057 "trid": { 00:22:14.057 "trtype": "TCP", 00:22:14.057 "adrfam": "IPv4", 00:22:14.057 "traddr": "10.0.0.2", 00:22:14.057 "trsvcid": "4420", 00:22:14.057 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:14.057 }, 00:22:14.057 "ctrlr_data": { 00:22:14.057 "cntlid": 2, 00:22:14.057 "vendor_id": "0x8086", 00:22:14.057 "model_number": "SPDK bdev Controller", 00:22:14.057 "serial_number": "00000000000000000000", 00:22:14.057 "firmware_revision": "25.01", 00:22:14.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.057 "oacs": { 00:22:14.057 "security": 0, 00:22:14.057 "format": 0, 00:22:14.057 "firmware": 0, 00:22:14.057 "ns_manage": 0 00:22:14.057 }, 00:22:14.057 "multi_ctrlr": true, 00:22:14.057 "ana_reporting": false 00:22:14.057 }, 00:22:14.057 "vs": { 00:22:14.057 "nvme_version": "1.3" 00:22:14.057 }, 00:22:14.057 "ns_data": { 00:22:14.057 "id": 1, 00:22:14.057 "can_share": true 00:22:14.057 } 00:22:14.057 } 00:22:14.057 ], 00:22:14.057 "mp_policy": "active_passive" 00:22:14.057 } 00:22:14.057 } 00:22:14.057 ] 00:22:14.057 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.057 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.057 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.057 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.JDM9F1syQb 
00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.JDM9F1syQb 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.JDM9F1syQb 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 [2024-11-20 17:16:32.142887] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:14.317 [2024-11-20 17:16:32.142991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
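The TLS key handling shown above (`mktemp`, `echo -n`, `chmod 0600`, `keyring_file_add_key`) amounts to writing an NVMe TLS PSK interchange string to a private temp file before registering it. A minimal sketch, using the same example PSK the test echoes; the trailing `rpc.py` calls are left as comments since they need a running target:

```shell
# Sketch of the PSK file preparation seen in the xtrace above.
# The key string is the test's own example interchange-format PSK.
set -euo pipefail

key_path=$(mktemp)
# Interchange format is "NVMeTLSkey-1:01:<base64+crc>:" -- note the trailing colon
# and the absence of a newline (echo -n).
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

# With a live nvmf_tgt, the log then runs (shown here as comments only):
#   rpc.py keyring_file_add_key key0 "$key_path"
#   rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
#   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
#   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
```

The 0600 mode matters: the keyring refuses keys readable by other users.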
00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 [2024-11-20 17:16:32.162956] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.317 nvme0n1 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 [ 00:22:14.317 { 00:22:14.317 "name": "nvme0n1", 00:22:14.317 "aliases": [ 00:22:14.317 "f6c965ce-f91a-4345-8092-ec84cd8606a7" 00:22:14.317 ], 00:22:14.317 "product_name": "NVMe disk", 00:22:14.317 "block_size": 512, 00:22:14.317 "num_blocks": 2097152, 00:22:14.317 "uuid": "f6c965ce-f91a-4345-8092-ec84cd8606a7", 00:22:14.317 "numa_id": 1, 00:22:14.317 "assigned_rate_limits": { 00:22:14.317 "rw_ios_per_sec": 0, 00:22:14.317 
"rw_mbytes_per_sec": 0, 00:22:14.317 "r_mbytes_per_sec": 0, 00:22:14.317 "w_mbytes_per_sec": 0 00:22:14.317 }, 00:22:14.317 "claimed": false, 00:22:14.317 "zoned": false, 00:22:14.317 "supported_io_types": { 00:22:14.317 "read": true, 00:22:14.317 "write": true, 00:22:14.317 "unmap": false, 00:22:14.317 "flush": true, 00:22:14.317 "reset": true, 00:22:14.317 "nvme_admin": true, 00:22:14.317 "nvme_io": true, 00:22:14.317 "nvme_io_md": false, 00:22:14.317 "write_zeroes": true, 00:22:14.317 "zcopy": false, 00:22:14.317 "get_zone_info": false, 00:22:14.317 "zone_management": false, 00:22:14.317 "zone_append": false, 00:22:14.317 "compare": true, 00:22:14.317 "compare_and_write": true, 00:22:14.317 "abort": true, 00:22:14.317 "seek_hole": false, 00:22:14.317 "seek_data": false, 00:22:14.317 "copy": true, 00:22:14.317 "nvme_iov_md": false 00:22:14.317 }, 00:22:14.317 "memory_domains": [ 00:22:14.317 { 00:22:14.317 "dma_device_id": "system", 00:22:14.317 "dma_device_type": 1 00:22:14.317 } 00:22:14.317 ], 00:22:14.317 "driver_specific": { 00:22:14.317 "nvme": [ 00:22:14.317 { 00:22:14.317 "trid": { 00:22:14.317 "trtype": "TCP", 00:22:14.317 "adrfam": "IPv4", 00:22:14.317 "traddr": "10.0.0.2", 00:22:14.317 "trsvcid": "4421", 00:22:14.317 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:14.317 }, 00:22:14.317 "ctrlr_data": { 00:22:14.317 "cntlid": 3, 00:22:14.317 "vendor_id": "0x8086", 00:22:14.317 "model_number": "SPDK bdev Controller", 00:22:14.317 "serial_number": "00000000000000000000", 00:22:14.317 "firmware_revision": "25.01", 00:22:14.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.317 "oacs": { 00:22:14.317 "security": 0, 00:22:14.317 "format": 0, 00:22:14.317 "firmware": 0, 00:22:14.317 "ns_manage": 0 00:22:14.317 }, 00:22:14.317 "multi_ctrlr": true, 00:22:14.317 "ana_reporting": false 00:22:14.317 }, 00:22:14.317 "vs": { 00:22:14.317 "nvme_version": "1.3" 00:22:14.317 }, 00:22:14.317 "ns_data": { 00:22:14.317 "id": 1, 00:22:14.317 "can_share": true 00:22:14.317 } 
00:22:14.317 } 00:22:14.317 ], 00:22:14.317 "mp_policy": "active_passive" 00:22:14.317 } 00:22:14.317 } 00:22:14.317 ] 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.JDM9F1syQb 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.317 rmmod nvme_tcp 00:22:14.317 rmmod nvme_fabrics 00:22:14.317 rmmod nvme_keyring 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:14.317 17:16:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2572319 ']' 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2572319 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2572319 ']' 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2572319 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.317 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2572319 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2572319' 00:22:14.576 killing process with pid 2572319 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2572319 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2572319 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.576 
17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.576 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.577 17:16:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.110 17:16:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.110 00:22:17.110 real 0m9.510s 00:22:17.110 user 0m3.113s 00:22:17.110 sys 0m4.829s 00:22:17.110 17:16:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.110 17:16:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.110 ************************************ 00:22:17.110 END TEST nvmf_async_init 00:22:17.110 ************************************ 00:22:17.110 17:16:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.111 ************************************ 00:22:17.111 START TEST dma 00:22:17.111 ************************************ 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
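The RPC sequence the async_init test above drove through `rpc_cmd` can be condensed as follows. This is a dry-run sketch, not the test script itself: `RPC` defaults to `echo rpc.py` so the calls are printed rather than executed; pointing it at the real `scripts/rpc.py` (with a running `nvmf_tgt`) would replay the non-TLS half of the test.

```shell
# Condensed replay of the rpc_cmd calls visible in the async_init log above.
# RPC="echo rpc.py" keeps this a dry run; it is an illustration-only switch.
set -euo pipefail

RPC=${RPC:-echo rpc.py}

async_init_rpcs() {
  # $RPC is intentionally unquoted so "echo rpc.py" word-splits into a command.
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_null_create null0 1024 512
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f6c965cef91a43458092ec84cd8606a7
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  $RPC bdev_nvme_reset_controller nvme0
  $RPC bdev_nvme_detach_controller nvme0
}

async_init_rpcs
```

The `-g` UUID passed to `nvmf_subsystem_add_ns` is why the attached bdev reports alias `f6c965ce-f91a-4345-8092-ec84cd8606a7` in the `bdev_get_bdevs` dumps above, and the reset/detach pair is what bumps `cntlid` from 1 to 2 between the two dumps.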
00:22:17.111 * Looking for test storage... 00:22:17.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:17.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.111 --rc genhtml_branch_coverage=1 00:22:17.111 --rc genhtml_function_coverage=1 00:22:17.111 --rc genhtml_legend=1 00:22:17.111 --rc geninfo_all_blocks=1 00:22:17.111 --rc geninfo_unexecuted_blocks=1 00:22:17.111 00:22:17.111 ' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:17.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.111 --rc genhtml_branch_coverage=1 00:22:17.111 --rc genhtml_function_coverage=1 
00:22:17.111 --rc genhtml_legend=1 00:22:17.111 --rc geninfo_all_blocks=1 00:22:17.111 --rc geninfo_unexecuted_blocks=1 00:22:17.111 00:22:17.111 ' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:17.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.111 --rc genhtml_branch_coverage=1 00:22:17.111 --rc genhtml_function_coverage=1 00:22:17.111 --rc genhtml_legend=1 00:22:17.111 --rc geninfo_all_blocks=1 00:22:17.111 --rc geninfo_unexecuted_blocks=1 00:22:17.111 00:22:17.111 ' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:17.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.111 --rc genhtml_branch_coverage=1 00:22:17.111 --rc genhtml_function_coverage=1 00:22:17.111 --rc genhtml_legend=1 00:22:17.111 --rc geninfo_all_blocks=1 00:22:17.111 --rc geninfo_unexecuted_blocks=1 00:22:17.111 00:22:17.111 ' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:17.111 
17:16:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:17.111 00:22:17.111 real 0m0.204s 00:22:17.111 user 0m0.122s 00:22:17.111 sys 0m0.096s 00:22:17.111 17:16:34 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.111 17:16:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:17.111 ************************************ 00:22:17.112 END TEST dma 00:22:17.112 ************************************ 00:22:17.112 17:16:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:17.112 17:16:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.112 17:16:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.112 17:16:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.112 ************************************ 00:22:17.112 START TEST nvmf_identify 00:22:17.112 ************************************ 00:22:17.112 17:16:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:17.112 * Looking for test storage... 
00:22:17.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:17.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.112 --rc genhtml_branch_coverage=1 00:22:17.112 --rc genhtml_function_coverage=1 00:22:17.112 --rc genhtml_legend=1 00:22:17.112 --rc geninfo_all_blocks=1 00:22:17.112 --rc geninfo_unexecuted_blocks=1 00:22:17.112 00:22:17.112 ' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:17.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.112 --rc genhtml_branch_coverage=1 00:22:17.112 --rc genhtml_function_coverage=1 00:22:17.112 --rc genhtml_legend=1 00:22:17.112 --rc geninfo_all_blocks=1 00:22:17.112 --rc geninfo_unexecuted_blocks=1 00:22:17.112 00:22:17.112 ' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:17.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.112 --rc genhtml_branch_coverage=1 00:22:17.112 --rc genhtml_function_coverage=1 00:22:17.112 --rc genhtml_legend=1 00:22:17.112 --rc geninfo_all_blocks=1 00:22:17.112 --rc geninfo_unexecuted_blocks=1 00:22:17.112 00:22:17.112 ' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:17.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.112 --rc genhtml_branch_coverage=1 00:22:17.112 --rc genhtml_function_coverage=1 00:22:17.112 --rc genhtml_legend=1 00:22:17.112 --rc geninfo_all_blocks=1 00:22:17.112 --rc geninfo_unexecuted_blocks=1 00:22:17.112 00:22:17.112 ' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.112 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.372 17:16:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.954 17:16:40 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:23.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.954 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.954 
17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:23.955 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:23.955 Found net devices under 0000:86:00.0: cvl_0_0 00:22:23.955 17:16:40 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:23.955 Found net devices under 0000:86:00.1: cvl_0_1 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.955 17:16:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:22:23.955 00:22:23.955 --- 10.0.0.2 ping statistics --- 00:22:23.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.955 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:22:23.955 00:22:23.955 --- 10.0.0.1 ping statistics --- 00:22:23.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.955 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2576137 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2576137 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2576137 ']' 00:22:23.955 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 [2024-11-20 17:16:41.130665] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:22:23.956 [2024-11-20 17:16:41.130714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.956 [2024-11-20 17:16:41.208733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.956 [2024-11-20 17:16:41.252724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.956 [2024-11-20 17:16:41.252761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.956 [2024-11-20 17:16:41.252768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.956 [2024-11-20 17:16:41.252774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.956 [2024-11-20 17:16:41.252779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.956 [2024-11-20 17:16:41.254310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.956 [2024-11-20 17:16:41.254416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.956 [2024-11-20 17:16:41.254450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.956 [2024-11-20 17:16:41.254450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 [2024-11-20 17:16:41.356665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 Malloc0 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.956 17:16:41 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 [2024-11-20 17:16:41.457670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 17:16:41 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 [ 00:22:23.956 { 00:22:23.956 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:23.956 "subtype": "Discovery", 00:22:23.956 "listen_addresses": [ 00:22:23.956 { 00:22:23.956 "trtype": "TCP", 00:22:23.956 "adrfam": "IPv4", 00:22:23.956 "traddr": "10.0.0.2", 00:22:23.956 "trsvcid": "4420" 00:22:23.956 } 00:22:23.956 ], 00:22:23.956 "allow_any_host": true, 00:22:23.956 "hosts": [] 00:22:23.956 }, 00:22:23.956 { 00:22:23.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.956 "subtype": "NVMe", 00:22:23.956 "listen_addresses": [ 00:22:23.956 { 00:22:23.956 "trtype": "TCP", 00:22:23.956 "adrfam": "IPv4", 00:22:23.956 "traddr": "10.0.0.2", 00:22:23.956 "trsvcid": "4420" 00:22:23.956 } 00:22:23.956 ], 00:22:23.956 "allow_any_host": true, 00:22:23.956 "hosts": [], 00:22:23.956 "serial_number": "SPDK00000000000001", 00:22:23.956 "model_number": "SPDK bdev Controller", 00:22:23.956 "max_namespaces": 32, 00:22:23.956 "min_cntlid": 1, 00:22:23.956 "max_cntlid": 65519, 00:22:23.956 "namespaces": [ 00:22:23.956 { 00:22:23.956 "nsid": 1, 00:22:23.956 "bdev_name": "Malloc0", 00:22:23.956 "name": "Malloc0", 00:22:23.956 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:23.956 "eui64": "ABCDEF0123456789", 00:22:23.956 "uuid": "044e6c5d-2cfe-4a11-8576-da1ce193af84" 00:22:23.956 } 00:22:23.956 ] 00:22:23.956 } 00:22:23.956 ] 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.956 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:23.956 [2024-11-20 17:16:41.511384] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:22:23.956 [2024-11-20 17:16:41.511418] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576168 ] 00:22:23.956 [2024-11-20 17:16:41.549715] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:23.956 [2024-11-20 17:16:41.549761] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:23.956 [2024-11-20 17:16:41.549766] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:23.956 [2024-11-20 17:16:41.549778] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:23.956 [2024-11-20 17:16:41.549788] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:23.956 [2024-11-20 17:16:41.553512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:23.956 [2024-11-20 17:16:41.553545] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x906690 0 00:22:23.956 [2024-11-20 17:16:41.560242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:23.956 [2024-11-20 17:16:41.560257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:23.956 [2024-11-20 17:16:41.560262] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:23.956 [2024-11-20 17:16:41.560264] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:23.956 [2024-11-20 17:16:41.560295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.956 [2024-11-20 17:16:41.560300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.956 [2024-11-20 17:16:41.560303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.956 [2024-11-20 17:16:41.560315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:23.956 [2024-11-20 17:16:41.560330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.956 [2024-11-20 17:16:41.568210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.956 [2024-11-20 17:16:41.568219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.956 [2024-11-20 17:16:41.568222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.956 [2024-11-20 17:16:41.568226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.956 [2024-11-20 17:16:41.568236] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:23.956 [2024-11-20 17:16:41.568242] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:23.956 [2024-11-20 17:16:41.568247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:23.956 [2024-11-20 17:16:41.568259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.956 [2024-11-20 17:16:41.568262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.956 [2024-11-20 17:16:41.568265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 
00:22:23.956 [2024-11-20 17:16:41.568272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.956 [2024-11-20 17:16:41.568284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.956 [2024-11-20 17:16:41.568442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.957 [2024-11-20 17:16:41.568448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.957 [2024-11-20 17:16:41.568451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.957 [2024-11-20 17:16:41.568462] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:23.957 [2024-11-20 17:16:41.568469] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:23.957 [2024-11-20 17:16:41.568475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.957 [2024-11-20 17:16:41.568487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.957 [2024-11-20 17:16:41.568497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.957 [2024-11-20 17:16:41.568560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.957 [2024-11-20 17:16:41.568565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:23.957 [2024-11-20 17:16:41.568568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.957 [2024-11-20 17:16:41.568577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:23.957 [2024-11-20 17:16:41.568583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:23.957 [2024-11-20 17:16:41.568589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.957 [2024-11-20 17:16:41.568601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.957 [2024-11-20 17:16:41.568611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.957 [2024-11-20 17:16:41.568679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.957 [2024-11-20 17:16:41.568685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.957 [2024-11-20 17:16:41.568688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.957 [2024-11-20 17:16:41.568696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:23.957 [2024-11-20 17:16:41.568704] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.957 [2024-11-20 17:16:41.568716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.957 [2024-11-20 17:16:41.568725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.957 [2024-11-20 17:16:41.568796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.957 [2024-11-20 17:16:41.568802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.957 [2024-11-20 17:16:41.568805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.957 [2024-11-20 17:16:41.568812] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:23.957 [2024-11-20 17:16:41.568818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:23.957 [2024-11-20 17:16:41.568825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:23.957 [2024-11-20 17:16:41.568932] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:23.957 [2024-11-20 17:16:41.568936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:23.957 [2024-11-20 17:16:41.568943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.568950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.957 [2024-11-20 17:16:41.568955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.957 [2024-11-20 17:16:41.568965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.957 [2024-11-20 17:16:41.569025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.957 [2024-11-20 17:16:41.569030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.957 [2024-11-20 17:16:41.569033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.957 [2024-11-20 17:16:41.569040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:23.957 [2024-11-20 17:16:41.569048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.957 [2024-11-20 17:16:41.569060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.957 [2024-11-20 17:16:41.569069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.957 [2024-11-20 
17:16:41.569146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.957 [2024-11-20 17:16:41.569151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.957 [2024-11-20 17:16:41.569154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.957 [2024-11-20 17:16:41.569161] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:23.957 [2024-11-20 17:16:41.569166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:23.957 [2024-11-20 17:16:41.569173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:23.957 [2024-11-20 17:16:41.569184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:23.957 [2024-11-20 17:16:41.569191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.957 [2024-11-20 17:16:41.569200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.957 [2024-11-20 17:16:41.569216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.957 [2024-11-20 17:16:41.569316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.957 [2024-11-20 17:16:41.569322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:22:23.957 [2024-11-20 17:16:41.569325] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569329] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x906690): datao=0, datal=4096, cccid=0 00:22:23.957 [2024-11-20 17:16:41.569333] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x968100) on tqpair(0x906690): expected_datao=0, payload_size=4096 00:22:23.957 [2024-11-20 17:16:41.569337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569343] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569347] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.957 [2024-11-20 17:16:41.569369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.957 [2024-11-20 17:16:41.569372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.957 [2024-11-20 17:16:41.569382] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:23.957 [2024-11-20 17:16:41.569386] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:23.957 [2024-11-20 17:16:41.569390] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:23.957 [2024-11-20 17:16:41.569398] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:23.957 [2024-11-20 17:16:41.569403] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:22:23.957 [2024-11-20 17:16:41.569407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:23.957 [2024-11-20 17:16:41.569416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:23.957 [2024-11-20 17:16:41.569423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.957 [2024-11-20 17:16:41.569435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.957 [2024-11-20 17:16:41.569445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0 00:22:23.957 [2024-11-20 17:16:41.569512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.957 [2024-11-20 17:16:41.569518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.957 [2024-11-20 17:16:41.569521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.957 [2024-11-20 17:16:41.569524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690 00:22:23.958 [2024-11-20 17:16:41.569530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.958 [2024-11-20 17:16:41.569533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.958 [2024-11-20 17:16:41.569537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x906690) 00:22:23.958 [2024-11-20 17:16:41.569542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.958 [2024-11-20 17:16:41.569547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x906690)
00:22:23.958 [2024-11-20 17:16:41.569561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.958 [2024-11-20 17:16:41.569566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x906690)
00:22:23.958 [2024-11-20 17:16:41.569577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.958 [2024-11-20 17:16:41.569582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.958 [2024-11-20 17:16:41.569593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.958 [2024-11-20 17:16:41.569597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:22:23.958 [2024-11-20 17:16:41.569605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:22:23.958 [2024-11-20 17:16:41.569610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x906690)
00:22:23.958 [2024-11-20 17:16:41.569619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.958 [2024-11-20 17:16:41.569630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968100, cid 0, qid 0
00:22:23.958 [2024-11-20 17:16:41.569634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968280, cid 1, qid 0
00:22:23.958 [2024-11-20 17:16:41.569638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968400, cid 2, qid 0
00:22:23.958 [2024-11-20 17:16:41.569642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.958 [2024-11-20 17:16:41.569646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968700, cid 4, qid 0
00:22:23.958 [2024-11-20 17:16:41.569744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.958 [2024-11-20 17:16:41.569750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.958 [2024-11-20 17:16:41.569753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968700) on tqpair=0x906690
00:22:23.958 [2024-11-20 17:16:41.569762] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:22:23.958 [2024-11-20 17:16:41.569767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:22:23.958 [2024-11-20 17:16:41.569776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x906690)
00:22:23.958 [2024-11-20 17:16:41.569785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.958 [2024-11-20 17:16:41.569795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968700, cid 4, qid 0
00:22:23.958 [2024-11-20 17:16:41.569877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:23.958 [2024-11-20 17:16:41.569883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:23.958 [2024-11-20 17:16:41.569886] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569892] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x906690): datao=0, datal=4096, cccid=4
00:22:23.958 [2024-11-20 17:16:41.569895] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x968700) on tqpair(0x906690): expected_datao=0, payload_size=4096
00:22:23.958 [2024-11-20 17:16:41.569899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569910] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.569913] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.958 [2024-11-20 17:16:41.610328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.958 [2024-11-20 17:16:41.610331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968700) on tqpair=0x906690
00:22:23.958 [2024-11-20 17:16:41.610348] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:22:23.958 [2024-11-20 17:16:41.610369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x906690)
00:22:23.958 [2024-11-20 17:16:41.610381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.958 [2024-11-20 17:16:41.610387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x906690)
00:22:23.958 [2024-11-20 17:16:41.610398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.958 [2024-11-20 17:16:41.610414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968700, cid 4, qid 0
00:22:23.958 [2024-11-20 17:16:41.610419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968880, cid 5, qid 0
00:22:23.958 [2024-11-20 17:16:41.610521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:23.958 [2024-11-20 17:16:41.610527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:23.958 [2024-11-20 17:16:41.610530] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610533] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x906690): datao=0, datal=1024, cccid=4
00:22:23.958 [2024-11-20 17:16:41.610537] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x968700) on tqpair(0x906690): expected_datao=0, payload_size=1024
00:22:23.958 [2024-11-20 17:16:41.610541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610547] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610550] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.958 [2024-11-20 17:16:41.610560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.958 [2024-11-20 17:16:41.610562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.610566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968880) on tqpair=0x906690
00:22:23.958 [2024-11-20 17:16:41.655210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.958 [2024-11-20 17:16:41.655222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.958 [2024-11-20 17:16:41.655225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.958 [2024-11-20 17:16:41.655229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968700) on tqpair=0x906690
00:22:23.958 [2024-11-20 17:16:41.655240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x906690)
00:22:23.959 [2024-11-20 17:16:41.655254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.959 [2024-11-20 17:16:41.655271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968700, cid 4, qid 0
00:22:23.959 [2024-11-20 17:16:41.655421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:23.959 [2024-11-20 17:16:41.655427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:23.959 [2024-11-20 17:16:41.655430] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655433] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x906690): datao=0, datal=3072, cccid=4
00:22:23.959 [2024-11-20 17:16:41.655437] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x968700) on tqpair(0x906690): expected_datao=0, payload_size=3072
00:22:23.959 [2024-11-20 17:16:41.655441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655460] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655464] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.959 [2024-11-20 17:16:41.655511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.959 [2024-11-20 17:16:41.655514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968700) on tqpair=0x906690
00:22:23.959 [2024-11-20 17:16:41.655525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x906690)
00:22:23.959 [2024-11-20 17:16:41.655534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.959 [2024-11-20 17:16:41.655547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968700, cid 4, qid 0
00:22:23.959 [2024-11-20 17:16:41.655625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:23.959 [2024-11-20
17:16:41.655630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:23.959 [2024-11-20 17:16:41.655633] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655636] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x906690): datao=0, datal=8, cccid=4
00:22:23.959 [2024-11-20 17:16:41.655640] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x968700) on tqpair(0x906690): expected_datao=0, payload_size=8
00:22:23.959 [2024-11-20 17:16:41.655644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655649] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.655653] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.696348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.959 [2024-11-20 17:16:41.696358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.959 [2024-11-20 17:16:41.696361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.959 [2024-11-20 17:16:41.696364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968700) on tqpair=0x906690
00:22:23.959 =====================================================
00:22:23.959 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:23.959 =====================================================
00:22:23.959 Controller Capabilities/Features
00:22:23.959 ================================
00:22:23.959 Vendor ID: 0000
00:22:23.959 Subsystem Vendor ID: 0000
00:22:23.959 Serial Number: ....................
00:22:23.959 Model Number: ........................................
00:22:23.959 Firmware Version: 25.01
00:22:23.959 Recommended Arb Burst: 0
00:22:23.959 IEEE OUI Identifier: 00 00 00
00:22:23.959 Multi-path I/O
00:22:23.959 May have multiple subsystem ports: No
00:22:23.959 May have multiple controllers: No
00:22:23.959 Associated with SR-IOV VF: No
00:22:23.959 Max Data Transfer Size: 131072
00:22:23.959 Max Number of Namespaces: 0
00:22:23.959 Max Number of I/O Queues: 1024
00:22:23.959 NVMe Specification Version (VS): 1.3
00:22:23.959 NVMe Specification Version (Identify): 1.3
00:22:23.959 Maximum Queue Entries: 128
00:22:23.959 Contiguous Queues Required: Yes
00:22:23.959 Arbitration Mechanisms Supported
00:22:23.959 Weighted Round Robin: Not Supported
00:22:23.959 Vendor Specific: Not Supported
00:22:23.959 Reset Timeout: 15000 ms
00:22:23.959 Doorbell Stride: 4 bytes
00:22:23.959 NVM Subsystem Reset: Not Supported
00:22:23.959 Command Sets Supported
00:22:23.959 NVM Command Set: Supported
00:22:23.959 Boot Partition: Not Supported
00:22:23.959 Memory Page Size Minimum: 4096 bytes
00:22:23.959 Memory Page Size Maximum: 4096 bytes
00:22:23.959 Persistent Memory Region: Not Supported
00:22:23.959 Optional Asynchronous Events Supported
00:22:23.959 Namespace Attribute Notices: Not Supported
00:22:23.959 Firmware Activation Notices: Not Supported
00:22:23.959 ANA Change Notices: Not Supported
00:22:23.959 PLE Aggregate Log Change Notices: Not Supported
00:22:23.959 LBA Status Info Alert Notices: Not Supported
00:22:23.959 EGE Aggregate Log Change Notices: Not Supported
00:22:23.959 Normal NVM Subsystem Shutdown event: Not Supported
00:22:23.959 Zone Descriptor Change Notices: Not Supported
00:22:23.959 Discovery Log Change Notices: Supported
00:22:23.959 Controller Attributes
00:22:23.959 128-bit Host Identifier: Not Supported
00:22:23.959 Non-Operational Permissive Mode: Not Supported
00:22:23.959 NVM Sets: Not Supported
00:22:23.959 Read Recovery Levels: Not Supported
00:22:23.959 Endurance Groups: Not Supported
00:22:23.959 Predictable Latency Mode: Not Supported
00:22:23.959 Traffic Based Keep ALive: Not Supported
00:22:23.959 Namespace Granularity: Not Supported
00:22:23.959 SQ Associations: Not Supported
00:22:23.959 UUID List: Not Supported
00:22:23.959 Multi-Domain Subsystem: Not Supported
00:22:23.959 Fixed Capacity Management: Not Supported
00:22:23.959 Variable Capacity Management: Not Supported
00:22:23.959 Delete Endurance Group: Not Supported
00:22:23.959 Delete NVM Set: Not Supported
00:22:23.959 Extended LBA Formats Supported: Not Supported
00:22:23.959 Flexible Data Placement Supported: Not Supported
00:22:23.959
00:22:23.959 Controller Memory Buffer Support
00:22:23.959 ================================
00:22:23.959 Supported: No
00:22:23.959
00:22:23.959 Persistent Memory Region Support
00:22:23.959 ================================
00:22:23.959 Supported: No
00:22:23.959
00:22:23.959 Admin Command Set Attributes
00:22:23.959 ============================
00:22:23.959 Security Send/Receive: Not Supported
00:22:23.959 Format NVM: Not Supported
00:22:23.959 Firmware Activate/Download: Not Supported
00:22:23.959 Namespace Management: Not Supported
00:22:23.959 Device Self-Test: Not Supported
00:22:23.959 Directives: Not Supported
00:22:23.959 NVMe-MI: Not Supported
00:22:23.959 Virtualization Management: Not Supported
00:22:23.959 Doorbell Buffer Config: Not Supported
00:22:23.959 Get LBA Status Capability: Not Supported
00:22:23.959 Command & Feature Lockdown Capability: Not Supported
00:22:23.959 Abort Command Limit: 1
00:22:23.959 Async Event Request Limit: 4
00:22:23.959 Number of Firmware Slots: N/A
00:22:23.959 Firmware Slot 1 Read-Only: N/A
00:22:23.959 Firmware Activation Without Reset: N/A
00:22:23.959 Multiple Update Detection Support: N/A
00:22:23.959 Firmware Update Granularity: No Information Provided
00:22:23.959 Per-Namespace SMART Log: No
00:22:23.959 Asymmetric Namespace Access Log Page: Not Supported
00:22:23.960 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:23.960 Command Effects Log Page: Not Supported
00:22:23.960 Get Log Page Extended Data: Supported
00:22:23.960 Telemetry Log Pages: Not Supported
00:22:23.960 Persistent Event Log Pages: Not Supported
00:22:23.960 Supported Log Pages Log Page: May Support
00:22:23.960 Commands Supported & Effects Log Page: Not Supported
00:22:23.960 Feature Identifiers & Effects Log Page:May Support
00:22:23.960 NVMe-MI Commands & Effects Log Page: May Support
00:22:23.960 Data Area 4 for Telemetry Log: Not Supported
00:22:23.960 Error Log Page Entries Supported: 128
00:22:23.960 Keep Alive: Not Supported
00:22:23.960
00:22:23.960 NVM Command Set Attributes
00:22:23.960 ==========================
00:22:23.960 Submission Queue Entry Size
00:22:23.960 Max: 1
00:22:23.960 Min: 1
00:22:23.960 Completion Queue Entry Size
00:22:23.960 Max: 1
00:22:23.960 Min: 1
00:22:23.960 Number of Namespaces: 0
00:22:23.960 Compare Command: Not Supported
00:22:23.960 Write Uncorrectable Command: Not Supported
00:22:23.960 Dataset Management Command: Not Supported
00:22:23.960 Write Zeroes Command: Not Supported
00:22:23.960 Set Features Save Field: Not Supported
00:22:23.960 Reservations: Not Supported
00:22:23.960 Timestamp: Not Supported
00:22:23.960 Copy: Not Supported
00:22:23.960 Volatile Write Cache: Not Present
00:22:23.960 Atomic Write Unit (Normal): 1
00:22:23.960 Atomic Write Unit (PFail): 1
00:22:23.960 Atomic Compare & Write Unit: 1
00:22:23.960 Fused Compare & Write: Supported
00:22:23.960 Scatter-Gather List
00:22:23.960 SGL Command Set: Supported
00:22:23.960 SGL Keyed: Supported
00:22:23.960 SGL Bit Bucket Descriptor: Not Supported
00:22:23.960 SGL Metadata Pointer: Not Supported
00:22:23.960 Oversized SGL: Not Supported
00:22:23.960 SGL Metadata Address: Not Supported
00:22:23.960 SGL Offset: Supported
00:22:23.960 Transport SGL Data Block: Not Supported
00:22:23.960 Replay Protected Memory Block: Not Supported
00:22:23.960
00:22:23.960 Firmware Slot Information
00:22:23.960 =========================
00:22:23.960 Active slot: 0
00:22:23.960
00:22:23.960
00:22:23.960 Error Log
00:22:23.960 =========
00:22:23.960
00:22:23.960 Active Namespaces
00:22:23.960 =================
00:22:23.960 Discovery Log Page
00:22:23.960 ==================
00:22:23.960 Generation Counter: 2
00:22:23.960 Number of Records: 2
00:22:23.960 Record Format: 0
00:22:23.960
00:22:23.960 Discovery Log Entry 0
00:22:23.960 ----------------------
00:22:23.960 Transport Type: 3 (TCP)
00:22:23.960 Address Family: 1 (IPv4)
00:22:23.960 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:23.960 Entry Flags:
00:22:23.960 Duplicate Returned Information: 1
00:22:23.960 Explicit Persistent Connection Support for Discovery: 1
00:22:23.960 Transport Requirements:
00:22:23.960 Secure Channel: Not Required
00:22:23.960 Port ID: 0 (0x0000)
00:22:23.960 Controller ID: 65535 (0xffff)
00:22:23.960 Admin Max SQ Size: 128
00:22:23.960 Transport Service Identifier: 4420
00:22:23.960 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:23.960 Transport Address: 10.0.0.2
00:22:23.960 Discovery Log Entry 1
00:22:23.960 ----------------------
00:22:23.960 Transport Type: 3 (TCP)
00:22:23.960 Address Family: 1 (IPv4)
00:22:23.960 Subsystem Type: 2 (NVM Subsystem)
00:22:23.960 Entry Flags:
00:22:23.960 Duplicate Returned Information: 0
00:22:23.960 Explicit Persistent Connection Support for Discovery: 0
00:22:23.960 Transport Requirements:
00:22:23.960 Secure Channel: Not Required
00:22:23.960 Port ID: 0 (0x0000)
00:22:23.960 Controller ID: 65535 (0xffff)
00:22:23.960 Admin Max SQ Size: 128
00:22:23.960 Transport Service Identifier: 4420
00:22:23.960 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:23.960 Transport Address: 10.0.0.2 [2024-11-20 17:16:41.696443] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:23.960 [2024-11-20
17:16:41.696454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968100) on tqpair=0x906690
00:22:23.960 [2024-11-20 17:16:41.696460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.960 [2024-11-20 17:16:41.696464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968280) on tqpair=0x906690
00:22:23.960 [2024-11-20 17:16:41.696469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.960 [2024-11-20 17:16:41.696475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968400) on tqpair=0x906690
00:22:23.960 [2024-11-20 17:16:41.696479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.960 [2024-11-20 17:16:41.696483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.960 [2024-11-20 17:16:41.696487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.960 [2024-11-20 17:16:41.696496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.960 [2024-11-20 17:16:41.696510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.960 [2024-11-20 17:16:41.696523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.960 [2024-11-20 17:16:41.696582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.960 [2024-11-20 17:16:41.696588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.960 [2024-11-20 17:16:41.696591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.960 [2024-11-20 17:16:41.696601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.960 [2024-11-20 17:16:41.696613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.960 [2024-11-20 17:16:41.696625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.960 [2024-11-20 17:16:41.696697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.960 [2024-11-20 17:16:41.696702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.960 [2024-11-20 17:16:41.696705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.960 [2024-11-20 17:16:41.696713] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:22:23.960 [2024-11-20 17:16:41.696717] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:22:23.960 [2024-11-20 17:16:41.696724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.960 [2024-11-20 17:16:41.696737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.960 [2024-11-20 17:16:41.696746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.960 [2024-11-20 17:16:41.696805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.960 [2024-11-20 17:16:41.696810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.960 [2024-11-20 17:16:41.696813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.960 [2024-11-20 17:16:41.696825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.960 [2024-11-20 17:16:41.696839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.960 [2024-11-20 17:16:41.696848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.960 [2024-11-20 17:16:41.696927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.960 [2024-11-20 17:16:41.696932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.960 [2024-11-20 17:16:41.696935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.960 [2024-11-20 17:16:41.696947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.960 [2024-11-20 17:16:41.696954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.696959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.696969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.697903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.697915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.697925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.697987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.697992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.697995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.697998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.698006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.698010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.698013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.698018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.698027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.698096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.698101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.698104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.698107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.698116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.698119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.698122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.961 [2024-11-20 17:16:41.698128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.961 [2024-11-20 17:16:41.698137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0
00:22:23.961 [2024-11-20 17:16:41.698205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.961 [2024-11-20 17:16:41.698211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.961 [2024-11-20 17:16:41.698214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.961 [2024-11-20 17:16:41.698217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690
00:22:23.961 [2024-11-20 17:16:41.698226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.962 [2024-11-20 17:16:41.698229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.962 [2024-11-20 17:16:41.698232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690)
00:22:23.962 [2024-11-20 17:16:41.698237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.962 [2024-11-20
17:16:41.698247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.698310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.698315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.698318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.698329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.698341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.698350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.698418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.698423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.698426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.698438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.698450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.698459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.698518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.698524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.698527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.698538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.698550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.698559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.698616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.698623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.698626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.698638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:23.962 [2024-11-20 17:16:41.698641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.698650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.698659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.698724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.698729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.698732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.698744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.698756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.698764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.698825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.698831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.698834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:23.962 [2024-11-20 17:16:41.698837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.698845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.698857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.698866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.698935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.698940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.698943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.698955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.698961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.698966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.698976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.699035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:23.962 [2024-11-20 17:16:41.699041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.699046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.699049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.699058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.699061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.699064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.699069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.699079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.699140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.699145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.699148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.699151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.699159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.699163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.699166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.699171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:23.962 [2024-11-20 17:16:41.699181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.703210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.703219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.703222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.703225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.703234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.703237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.703240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x906690) 00:22:23.962 [2024-11-20 17:16:41.703246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.962 [2024-11-20 17:16:41.703257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x968580, cid 3, qid 0 00:22:23.962 [2024-11-20 17:16:41.703403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.962 [2024-11-20 17:16:41.703408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.962 [2024-11-20 17:16:41.703411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.962 [2024-11-20 17:16:41.703414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x968580) on tqpair=0x906690 00:22:23.962 [2024-11-20 17:16:41.703421] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:22:23.962 00:22:23.962 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:23.962 [2024-11-20 17:16:41.739869] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:22:23.962 [2024-11-20 17:16:41.739902] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576170 ] 00:22:23.963 [2024-11-20 17:16:41.777107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:23.963 [2024-11-20 17:16:41.777145] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:23.963 [2024-11-20 17:16:41.777149] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:23.963 [2024-11-20 17:16:41.777161] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:23.963 [2024-11-20 17:16:41.777169] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:23.963 [2024-11-20 17:16:41.781380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:23.963 [2024-11-20 17:16:41.781406] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x63f690 0 00:22:23.963 [2024-11-20 17:16:41.789215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:23.963 [2024-11-20 17:16:41.789229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:23.963 [2024-11-20 17:16:41.789233] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:23.963 [2024-11-20 17:16:41.789236] nvme_tcp.c:1502:nvme_tcp_icresp_handle: 
*DEBUG*: host_ddgst_enable: 0 00:22:23.963 [2024-11-20 17:16:41.789262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.789267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.789270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.963 [2024-11-20 17:16:41.789280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:23.963 [2024-11-20 17:16:41.789297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.963 [2024-11-20 17:16:41.797212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.963 [2024-11-20 17:16:41.797221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.963 [2024-11-20 17:16:41.797224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.963 [2024-11-20 17:16:41.797238] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:23.963 [2024-11-20 17:16:41.797244] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:23.963 [2024-11-20 17:16:41.797249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:23.963 [2024-11-20 17:16:41.797259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.963 [2024-11-20 17:16:41.797273] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.963 [2024-11-20 17:16:41.797288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.963 [2024-11-20 17:16:41.797448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.963 [2024-11-20 17:16:41.797454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.963 [2024-11-20 17:16:41.797457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.963 [2024-11-20 17:16:41.797465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:23.963 [2024-11-20 17:16:41.797472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:23.963 [2024-11-20 17:16:41.797481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.963 [2024-11-20 17:16:41.797494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.963 [2024-11-20 17:16:41.797504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.963 [2024-11-20 17:16:41.797595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.963 [2024-11-20 17:16:41.797601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.963 [2024-11-20 17:16:41.797604] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.963 [2024-11-20 17:16:41.797612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:23.963 [2024-11-20 17:16:41.797619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:23.963 [2024-11-20 17:16:41.797624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.963 [2024-11-20 17:16:41.797636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.963 [2024-11-20 17:16:41.797646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.963 [2024-11-20 17:16:41.797747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.963 [2024-11-20 17:16:41.797753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.963 [2024-11-20 17:16:41.797755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.963 [2024-11-20 17:16:41.797763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:23.963 [2024-11-20 17:16:41.797771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797775] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.963 [2024-11-20 17:16:41.797784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.963 [2024-11-20 17:16:41.797794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.963 [2024-11-20 17:16:41.797856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.963 [2024-11-20 17:16:41.797862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.963 [2024-11-20 17:16:41.797865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.797869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.963 [2024-11-20 17:16:41.797872] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:23.963 [2024-11-20 17:16:41.797876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:23.963 [2024-11-20 17:16:41.797883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:23.963 [2024-11-20 17:16:41.797990] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:23.963 [2024-11-20 17:16:41.797997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:23.963 [2024-11-20 17:16:41.798003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.963 [2024-11-20 
17:16:41.798006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.798010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.963 [2024-11-20 17:16:41.798015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.963 [2024-11-20 17:16:41.798025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.963 [2024-11-20 17:16:41.798090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.963 [2024-11-20 17:16:41.798096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.963 [2024-11-20 17:16:41.798099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.798102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.963 [2024-11-20 17:16:41.798106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:23.963 [2024-11-20 17:16:41.798114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.798118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.798121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.963 [2024-11-20 17:16:41.798126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.963 [2024-11-20 17:16:41.798136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.963 [2024-11-20 17:16:41.798240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.963 [2024-11-20 17:16:41.798246] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.963 [2024-11-20 17:16:41.798249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.798253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.963 [2024-11-20 17:16:41.798257] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:23.963 [2024-11-20 17:16:41.798261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:23.963 [2024-11-20 17:16:41.798267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:23.963 [2024-11-20 17:16:41.798274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:23.963 [2024-11-20 17:16:41.798281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.963 [2024-11-20 17:16:41.798285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.963 [2024-11-20 17:16:41.798290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.963 [2024-11-20 17:16:41.798301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.963 [2024-11-20 17:16:41.798403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.963 [2024-11-20 17:16:41.798409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.963 [2024-11-20 17:16:41.798412] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.798416] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63f690): datao=0, datal=4096, cccid=0 00:22:23.964 [2024-11-20 17:16:41.798419] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1100) on tqpair(0x63f690): expected_datao=0, payload_size=4096 00:22:23.964 [2024-11-20 17:16:41.798428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.798441] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.798445] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.964 [2024-11-20 17:16:41.839350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.964 [2024-11-20 17:16:41.839354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.964 [2024-11-20 17:16:41.839364] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:23.964 [2024-11-20 17:16:41.839368] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:23.964 [2024-11-20 17:16:41.839373] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:23.964 [2024-11-20 17:16:41.839380] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:23.964 [2024-11-20 17:16:41.839384] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:23.964 [2024-11-20 17:16:41.839389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 
00:22:23.964 [2024-11-20 17:16:41.839398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:23.964 [2024-11-20 17:16:41.839405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.964 [2024-11-20 17:16:41.839418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.964 [2024-11-20 17:16:41.839430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.964 [2024-11-20 17:16:41.839539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.964 [2024-11-20 17:16:41.839545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.964 [2024-11-20 17:16:41.839548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690 00:22:23.964 [2024-11-20 17:16:41.839558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63f690) 00:22:23.964 [2024-11-20 17:16:41.839569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.964 [2024-11-20 17:16:41.839574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839577] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x63f690) 00:22:23.964 [2024-11-20 17:16:41.839586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.964 [2024-11-20 17:16:41.839590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x63f690) 00:22:23.964 [2024-11-20 17:16:41.839604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.964 [2024-11-20 17:16:41.839609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.964 [2024-11-20 17:16:41.839620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.964 [2024-11-20 17:16:41.839624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:23.964 [2024-11-20 17:16:41.839632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:23.964 [2024-11-20 17:16:41.839637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x63f690) 00:22:23.964 [2024-11-20 17:16:41.839646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-11-20 17:16:41.839656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1100, cid 0, qid 0 00:22:23.964 [2024-11-20 17:16:41.839662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1280, cid 1, qid 0 00:22:23.964 [2024-11-20 17:16:41.839666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1400, cid 2, qid 0 00:22:23.964 [2024-11-20 17:16:41.839669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.964 [2024-11-20 17:16:41.839673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1700, cid 4, qid 0 00:22:23.964 [2024-11-20 17:16:41.839772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.964 [2024-11-20 17:16:41.839778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.964 [2024-11-20 17:16:41.839781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1700) on tqpair=0x63f690 00:22:23.964 [2024-11-20 17:16:41.839791] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:23.964 [2024-11-20 17:16:41.839795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:23.964 [2024-11-20 17:16:41.839803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:23.964 [2024-11-20 17:16:41.839808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:23.964 [2024-11-20 17:16:41.839814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63f690) 00:22:23.964 [2024-11-20 17:16:41.839825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.964 [2024-11-20 17:16:41.839835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1700, cid 4, qid 0 00:22:23.964 [2024-11-20 17:16:41.839943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.964 [2024-11-20 17:16:41.839949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.964 [2024-11-20 17:16:41.839952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.839955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1700) on tqpair=0x63f690 00:22:23.964 [2024-11-20 17:16:41.840010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:23.964 [2024-11-20 17:16:41.840020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:23.964 [2024-11-20 17:16:41.840026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.840030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63f690) 00:22:23.964 [2024-11-20 17:16:41.840035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.964 [2024-11-20 17:16:41.840045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1700, cid 4, qid 0 00:22:23.964 [2024-11-20 17:16:41.840119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.964 [2024-11-20 17:16:41.840125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.964 [2024-11-20 17:16:41.840128] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.840131] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63f690): datao=0, datal=4096, cccid=4 00:22:23.964 [2024-11-20 17:16:41.840135] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1700) on tqpair(0x63f690): expected_datao=0, payload_size=4096 00:22:23.964 [2024-11-20 17:16:41.840139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.840145] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.840149] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.840192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.964 [2024-11-20 17:16:41.840198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.964 [2024-11-20 17:16:41.840209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.964 [2024-11-20 17:16:41.840213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1700) on tqpair=0x63f690 00:22:23.965 [2024-11-20 17:16:41.840221] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:23.965 [2024-11-20 17:16:41.840232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.840254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.840265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1700, cid 4, qid 0 00:22:23.965 [2024-11-20 17:16:41.840356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.965 [2024-11-20 17:16:41.840362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.965 [2024-11-20 17:16:41.840366] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840369] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63f690): datao=0, datal=4096, cccid=4 00:22:23.965 [2024-11-20 17:16:41.840373] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1700) on tqpair(0x63f690): expected_datao=0, payload_size=4096 00:22:23.965 [2024-11-20 17:16:41.840377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840382] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840385] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.965 [2024-11-20 17:16:41.840402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.965 [2024-11-20 17:16:41.840406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 
17:16:41.840409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1700) on tqpair=0x63f690 00:22:23.965 [2024-11-20 17:16:41.840419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.840442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.840451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1700, cid 4, qid 0 00:22:23.965 [2024-11-20 17:16:41.840525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.965 [2024-11-20 17:16:41.840531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.965 [2024-11-20 17:16:41.840534] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840537] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63f690): datao=0, datal=4096, cccid=4 00:22:23.965 [2024-11-20 17:16:41.840541] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1700) on tqpair(0x63f690): expected_datao=0, payload_size=4096 00:22:23.965 [2024-11-20 17:16:41.840545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840551] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840554] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.965 [2024-11-20 17:16:41.840601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.965 [2024-11-20 17:16:41.840604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1700) on tqpair=0x63f690 00:22:23.965 [2024-11-20 17:16:41.840613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:23.965 [2024-11-20 17:16:41.840647] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:23.965 [2024-11-20 17:16:41.840651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:23.965 
[2024-11-20 17:16:41.840655] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:23.965 [2024-11-20 17:16:41.840668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.840679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.840684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.840696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.965 [2024-11-20 17:16:41.840710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1700, cid 4, qid 0 00:22:23.965 [2024-11-20 17:16:41.840715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1880, cid 5, qid 0 00:22:23.965 [2024-11-20 17:16:41.840832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.965 [2024-11-20 17:16:41.840838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.965 [2024-11-20 17:16:41.840841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1700) on tqpair=0x63f690 00:22:23.965 [2024-11-20 17:16:41.840850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.965 [2024-11-20 17:16:41.840854] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.965 [2024-11-20 17:16:41.840858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1880) on tqpair=0x63f690 00:22:23.965 [2024-11-20 17:16:41.840868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.840877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.840886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1880, cid 5, qid 0 00:22:23.965 [2024-11-20 17:16:41.840981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.965 [2024-11-20 17:16:41.840987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.965 [2024-11-20 17:16:41.840990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.840994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1880) on tqpair=0x63f690 00:22:23.965 [2024-11-20 17:16:41.841001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.841005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.841010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.841019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1880, cid 5, qid 0 00:22:23.965 [2024-11-20 17:16:41.841078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:22:23.965 [2024-11-20 17:16:41.841084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.965 [2024-11-20 17:16:41.841087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.841090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1880) on tqpair=0x63f690 00:22:23.965 [2024-11-20 17:16:41.841099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.841102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.841107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.841116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1880, cid 5, qid 0 00:22:23.965 [2024-11-20 17:16:41.841184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.965 [2024-11-20 17:16:41.841190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.965 [2024-11-20 17:16:41.841193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.841196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1880) on tqpair=0x63f690 00:22:23.965 [2024-11-20 17:16:41.845213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.845220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.845225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.845231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 
17:16:41.845234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.845240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.845246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.965 [2024-11-20 17:16:41.845249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x63f690) 00:22:23.965 [2024-11-20 17:16:41.845254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.965 [2024-11-20 17:16:41.845260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x63f690) 00:22:23.966 [2024-11-20 17:16:41.845268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.966 [2024-11-20 17:16:41.845280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1880, cid 5, qid 0 00:22:23.966 [2024-11-20 17:16:41.845284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1700, cid 4, qid 0 00:22:23.966 [2024-11-20 17:16:41.845289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1a00, cid 6, qid 0 00:22:23.966 [2024-11-20 17:16:41.845292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1b80, cid 7, qid 0 00:22:23.966 [2024-11-20 17:16:41.845544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.966 [2024-11-20 17:16:41.845550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:22:23.966 [2024-11-20 17:16:41.845553] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845556] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63f690): datao=0, datal=8192, cccid=5 00:22:23.966 [2024-11-20 17:16:41.845560] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1880) on tqpair(0x63f690): expected_datao=0, payload_size=8192 00:22:23.966 [2024-11-20 17:16:41.845563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845586] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845590] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.966 [2024-11-20 17:16:41.845600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.966 [2024-11-20 17:16:41.845603] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63f690): datao=0, datal=512, cccid=4 00:22:23.966 [2024-11-20 17:16:41.845609] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1700) on tqpair(0x63f690): expected_datao=0, payload_size=512 00:22:23.966 [2024-11-20 17:16:41.845613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845623] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845626] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.966 [2024-11-20 17:16:41.845636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.966 [2024-11-20 17:16:41.845638] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845641] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63f690): datao=0, datal=512, cccid=6 00:22:23.966 [2024-11-20 17:16:41.845646] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1a00) on tqpair(0x63f690): expected_datao=0, payload_size=512 00:22:23.966 [2024-11-20 17:16:41.845649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845654] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845658] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.966 [2024-11-20 17:16:41.845667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.966 [2024-11-20 17:16:41.845670] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845673] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63f690): datao=0, datal=4096, cccid=7 00:22:23.966 [2024-11-20 17:16:41.845677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a1b80) on tqpair(0x63f690): expected_datao=0, payload_size=4096 00:22:23.966 [2024-11-20 17:16:41.845680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845686] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845689] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.966 [2024-11-20 17:16:41.845696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.966 [2024-11-20 17:16:41.845701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.966 [2024-11-20 17:16:41.845704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:23.966 [2024-11-20 17:16:41.845707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1880) on tqpair=0x63f690
00:22:23.966 [2024-11-20 17:16:41.845717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.966 [2024-11-20 17:16:41.845722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.966 [2024-11-20 17:16:41.845725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.966 [2024-11-20 17:16:41.845728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1700) on tqpair=0x63f690
00:22:23.966 [2024-11-20 17:16:41.845737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.966 [2024-11-20 17:16:41.845742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.966 [2024-11-20 17:16:41.845744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.966 [2024-11-20 17:16:41.845748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1a00) on tqpair=0x63f690
00:22:23.966 [2024-11-20 17:16:41.845753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.966 [2024-11-20 17:16:41.845758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.966 [2024-11-20 17:16:41.845762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.966 [2024-11-20 17:16:41.845765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1b80) on tqpair=0x63f690
00:22:23.966 =====================================================
00:22:23.966 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:23.966 =====================================================
00:22:23.966 Controller Capabilities/Features
00:22:23.966 ================================
00:22:23.966 Vendor ID: 8086
00:22:23.966 Subsystem Vendor ID: 8086
00:22:23.966 Serial Number: SPDK00000000000001
00:22:23.966 Model Number: SPDK bdev Controller
00:22:23.966 Firmware Version: 25.01
00:22:23.966 Recommended Arb Burst: 6
00:22:23.966 IEEE OUI Identifier: e4 d2 5c
00:22:23.966 Multi-path I/O
00:22:23.966 May have multiple subsystem ports: Yes
00:22:23.966 May have multiple controllers: Yes
00:22:23.966 Associated with SR-IOV VF: No
00:22:23.966 Max Data Transfer Size: 131072
00:22:23.966 Max Number of Namespaces: 32
00:22:23.966 Max Number of I/O Queues: 127
00:22:23.966 NVMe Specification Version (VS): 1.3
00:22:23.966 NVMe Specification Version (Identify): 1.3
00:22:23.966 Maximum Queue Entries: 128
00:22:23.966 Contiguous Queues Required: Yes
00:22:23.966 Arbitration Mechanisms Supported
00:22:23.966 Weighted Round Robin: Not Supported
00:22:23.966 Vendor Specific: Not Supported
00:22:23.966 Reset Timeout: 15000 ms
00:22:23.966 Doorbell Stride: 4 bytes
00:22:23.966 NVM Subsystem Reset: Not Supported
00:22:23.966 Command Sets Supported
00:22:23.966 NVM Command Set: Supported
00:22:23.966 Boot Partition: Not Supported
00:22:23.966 Memory Page Size Minimum: 4096 bytes
00:22:23.966 Memory Page Size Maximum: 4096 bytes
00:22:23.966 Persistent Memory Region: Not Supported
00:22:23.966 Optional Asynchronous Events Supported
00:22:23.966 Namespace Attribute Notices: Supported
00:22:23.966 Firmware Activation Notices: Not Supported
00:22:23.966 ANA Change Notices: Not Supported
00:22:23.966 PLE Aggregate Log Change Notices: Not Supported
00:22:23.966 LBA Status Info Alert Notices: Not Supported
00:22:23.966 EGE Aggregate Log Change Notices: Not Supported
00:22:23.966 Normal NVM Subsystem Shutdown event: Not Supported
00:22:23.966 Zone Descriptor Change Notices: Not Supported
00:22:23.966 Discovery Log Change Notices: Not Supported
00:22:23.966 Controller Attributes
00:22:23.966 128-bit Host Identifier: Supported
00:22:23.966 Non-Operational Permissive Mode: Not Supported
00:22:23.966 NVM Sets: Not Supported
00:22:23.966 Read Recovery Levels: Not Supported
00:22:23.966 Endurance Groups: Not Supported
00:22:23.966 Predictable Latency Mode: Not Supported
00:22:23.966 Traffic Based Keep ALive: Not Supported
00:22:23.966 Namespace Granularity: Not Supported
00:22:23.966 SQ Associations: Not Supported
00:22:23.966 UUID List: Not Supported
00:22:23.966 Multi-Domain Subsystem: Not Supported
00:22:23.966 Fixed Capacity Management: Not Supported
00:22:23.966 Variable Capacity Management: Not Supported
00:22:23.966 Delete Endurance Group: Not Supported
00:22:23.966 Delete NVM Set: Not Supported
00:22:23.966 Extended LBA Formats Supported: Not Supported
00:22:23.966 Flexible Data Placement Supported: Not Supported
00:22:23.966
00:22:23.966 Controller Memory Buffer Support
00:22:23.966 ================================
00:22:23.966 Supported: No
00:22:23.966
00:22:23.966 Persistent Memory Region Support
00:22:23.966 ================================
00:22:23.966 Supported: No
00:22:23.966
00:22:23.966 Admin Command Set Attributes
00:22:23.966 ============================
00:22:23.966 Security Send/Receive: Not Supported
00:22:23.966 Format NVM: Not Supported
00:22:23.966 Firmware Activate/Download: Not Supported
00:22:23.966 Namespace Management: Not Supported
00:22:23.966 Device Self-Test: Not Supported
00:22:23.966 Directives: Not Supported
00:22:23.966 NVMe-MI: Not Supported
00:22:23.966 Virtualization Management: Not Supported
00:22:23.966 Doorbell Buffer Config: Not Supported
00:22:23.966 Get LBA Status Capability: Not Supported
00:22:23.966 Command & Feature Lockdown Capability: Not Supported
00:22:23.966 Abort Command Limit: 4
00:22:23.966 Async Event Request Limit: 4
00:22:23.966 Number of Firmware Slots: N/A
00:22:23.966 Firmware Slot 1 Read-Only: N/A
00:22:23.966 Firmware Activation Without Reset: N/A
00:22:23.966 Multiple Update Detection Support: N/A
00:22:23.966 Firmware Update Granularity: No Information Provided
00:22:23.966 Per-Namespace SMART Log: No
00:22:23.966 Asymmetric Namespace Access Log Page: Not Supported
00:22:23.966 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:23.967 Command Effects Log Page: Supported
00:22:23.967 Get Log Page Extended Data: Supported
00:22:23.967 Telemetry Log Pages: Not Supported
00:22:23.967 Persistent Event Log Pages: Not Supported
00:22:23.967 Supported Log Pages Log Page: May Support
00:22:23.967 Commands Supported & Effects Log Page: Not Supported
00:22:23.967 Feature Identifiers & Effects Log Page:May Support
00:22:23.967 NVMe-MI Commands & Effects Log Page: May Support
00:22:23.967 Data Area 4 for Telemetry Log: Not Supported
00:22:23.967 Error Log Page Entries Supported: 128
00:22:23.967 Keep Alive: Supported
00:22:23.967 Keep Alive Granularity: 10000 ms
00:22:23.967
00:22:23.967 NVM Command Set Attributes
00:22:23.967 ==========================
00:22:23.967 Submission Queue Entry Size
00:22:23.967 Max: 64
00:22:23.967 Min: 64
00:22:23.967 Completion Queue Entry Size
00:22:23.967 Max: 16
00:22:23.967 Min: 16
00:22:23.967 Number of Namespaces: 32
00:22:23.967 Compare Command: Supported
00:22:23.967 Write Uncorrectable Command: Not Supported
00:22:23.967 Dataset Management Command: Supported
00:22:23.967 Write Zeroes Command: Supported
00:22:23.967 Set Features Save Field: Not Supported
00:22:23.967 Reservations: Supported
00:22:23.967 Timestamp: Not Supported
00:22:23.967 Copy: Supported
00:22:23.967 Volatile Write Cache: Present
00:22:23.967 Atomic Write Unit (Normal): 1
00:22:23.967 Atomic Write Unit (PFail): 1
00:22:23.967 Atomic Compare & Write Unit: 1
00:22:23.967 Fused Compare & Write: Supported
00:22:23.967 Scatter-Gather List
00:22:23.967 SGL Command Set: Supported
00:22:23.967 SGL Keyed: Supported
00:22:23.967 SGL Bit Bucket Descriptor: Not Supported
00:22:23.967 SGL Metadata Pointer: Not Supported
00:22:23.967 Oversized SGL: Not Supported
00:22:23.967 SGL Metadata Address: Not Supported
00:22:23.967 SGL Offset: Supported
00:22:23.967 Transport SGL Data Block: Not Supported
00:22:23.967 Replay Protected Memory Block: Not Supported
00:22:23.967
00:22:23.967 Firmware Slot Information
00:22:23.967 =========================
00:22:23.967 Active slot: 1
00:22:23.967 Slot 1 Firmware Revision: 25.01
00:22:23.967
00:22:23.967
00:22:23.967 Commands Supported and Effects
00:22:23.967 ==============================
00:22:23.967 Admin Commands
00:22:23.967 --------------
00:22:23.967 Get Log Page (02h): Supported
00:22:23.967 Identify (06h): Supported
00:22:23.967 Abort (08h): Supported
00:22:23.967 Set Features (09h): Supported
00:22:23.967 Get Features (0Ah): Supported
00:22:23.967 Asynchronous Event Request (0Ch): Supported
00:22:23.967 Keep Alive (18h): Supported
00:22:23.967 I/O Commands
00:22:23.967 ------------
00:22:23.967 Flush (00h): Supported LBA-Change
00:22:23.967 Write (01h): Supported LBA-Change
00:22:23.967 Read (02h): Supported
00:22:23.967 Compare (05h): Supported
00:22:23.967 Write Zeroes (08h): Supported LBA-Change
00:22:23.967 Dataset Management (09h): Supported LBA-Change
00:22:23.967 Copy (19h): Supported LBA-Change
00:22:23.967
00:22:23.967 Error Log
00:22:23.967 =========
00:22:23.967
00:22:23.967 Arbitration
00:22:23.967 ===========
00:22:23.967 Arbitration Burst: 1
00:22:23.967
00:22:23.967 Power Management
00:22:23.967 ================
00:22:23.967 Number of Power States: 1
00:22:23.967 Current Power State: Power State #0
00:22:23.967 Power State #0:
00:22:23.967 Max Power: 0.00 W
00:22:23.967 Non-Operational State: Operational
00:22:23.967 Entry Latency: Not Reported
00:22:23.967 Exit Latency: Not Reported
00:22:23.967 Relative Read Throughput: 0
00:22:23.967 Relative Read Latency: 0
00:22:23.967 Relative Write Throughput: 0
00:22:23.967 Relative Write Latency: 0
00:22:23.967 Idle Power: Not Reported
00:22:23.967 Active Power: Not Reported
00:22:23.967 Non-Operational Permissive Mode: Not Supported
00:22:23.967
00:22:23.967 Health Information
00:22:23.967 ==================
00:22:23.967 Critical Warnings:
00:22:23.967 Available Spare Space: OK
00:22:23.967 Temperature: OK
00:22:23.967 Device Reliability: OK
00:22:23.967 Read Only: No
00:22:23.967 Volatile Memory Backup: OK
00:22:23.967 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:23.967 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:23.967 Available Spare: 0%
00:22:23.967 Available Spare Threshold: 0%
00:22:23.967 Life Percentage Used:[2024-11-20 17:16:41.845845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.967 [2024-11-20 17:16:41.845849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x63f690)
00:22:23.967 [2024-11-20 17:16:41.845855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.967 [2024-11-20 17:16:41.845866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1b80, cid 7, qid 0
00:22:23.967 [2024-11-20 17:16:41.845985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.967 [2024-11-20 17:16:41.845991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.967 [2024-11-20 17:16:41.845994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.967 [2024-11-20 17:16:41.845997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1b80) on tqpair=0x63f690
00:22:23.967 [2024-11-20 17:16:41.846023] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:22:23.967 [2024-11-20 17:16:41.846032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1100) on tqpair=0x63f690
00:22:23.967 [2024-11-20 17:16:41.846038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.967 [2024-11-20 17:16:41.846042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1280) on tqpair=0x63f690
00:22:23.967 [2024-11-20 17:16:41.846046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.967 [2024-11-20 17:16:41.846050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1400) on tqpair=0x63f690 00:22:23.967 [2024-11-20 17:16:41.846054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.967 [2024-11-20 17:16:41.846059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.967 [2024-11-20 17:16:41.846062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.967 [2024-11-20 17:16:41.846069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.967 [2024-11-20 17:16:41.846081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-11-20 17:16:41.846092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.967 [2024-11-20 17:16:41.846185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.967 [2024-11-20 17:16:41.846191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.967 [2024-11-20 17:16:41.846194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.967 [2024-11-20 17:16:41.846217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:23.967 [2024-11-20 17:16:41.846224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.967 [2024-11-20 17:16:41.846229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-11-20 17:16:41.846243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.967 [2024-11-20 17:16:41.846336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.967 [2024-11-20 17:16:41.846342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.967 [2024-11-20 17:16:41.846345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.967 [2024-11-20 17:16:41.846352] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:23.967 [2024-11-20 17:16:41.846356] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:23.967 [2024-11-20 17:16:41.846364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.967 [2024-11-20 17:16:41.846379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.967 [2024-11-20 17:16:41.846389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.967 [2024-11-20 17:16:41.846449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.967 [2024-11-20 
17:16:41.846455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.967 [2024-11-20 17:16:41.846458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.967 [2024-11-20 17:16:41.846469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.967 [2024-11-20 17:16:41.846472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.846481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.846490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.846586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.846592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.846595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.846606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.846618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 
17:16:41.846626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.846688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.846693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.846696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.846707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.846719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.846728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.846789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.846795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.846797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.846809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.846822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.846831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.846896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.846901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.846904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.846916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.846923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.846928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.846937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.847005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.847010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.847013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.847024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:23.968 [2024-11-20 17:16:41.847027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.847036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.847045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.847105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.847110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.847113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.847124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.847136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.847145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.847210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.847216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.847219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:23.968 [2024-11-20 17:16:41.847222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.847230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.847242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.847253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.847320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.847325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.847328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.847339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.847351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.847360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.847437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:23.968 [2024-11-20 17:16:41.847443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.847446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.847457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.847469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.847478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.847554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.847560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.847563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.847574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.847586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:23.968 [2024-11-20 17:16:41.847595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.847655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.847660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.847663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.968 [2024-11-20 17:16:41.847674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.968 [2024-11-20 17:16:41.847686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.968 [2024-11-20 17:16:41.847695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.968 [2024-11-20 17:16:41.847761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.968 [2024-11-20 17:16:41.847767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.968 [2024-11-20 17:16:41.847770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.968 [2024-11-20 17:16:41.847773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.847782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.847785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.847788] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.847793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.847803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 17:16:41.847868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.847874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.847877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.847880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.847888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.847891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.847894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.847899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.847908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 17:16:41.847986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.847992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.847995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.847998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.848006] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.848017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.848026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 17:16:41.848103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.848109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.848112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.848123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.848134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.848143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 17:16:41.848209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.848217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.848220] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.848232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.848244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.848253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 17:16:41.848313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.848319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.848322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.848333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.848344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.848354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 
17:16:41.848415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.848420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.848423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.848434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.848446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.848455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 17:16:41.848510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.848516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.848519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.848530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.848542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.848551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 17:16:41.848611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.848617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.848621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.848632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690) 00:22:23.969 [2024-11-20 17:16:41.848644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.969 [2024-11-20 17:16:41.848653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0 00:22:23.969 [2024-11-20 17:16:41.848711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.969 [2024-11-20 17:16:41.848716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.969 [2024-11-20 17:16:41.848719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690 00:22:23.969 [2024-11-20 17:16:41.848730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.969 [2024-11-20 17:16:41.848734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.969 
[2024-11-20 17:16:41.848737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690)
00:22:23.969 [2024-11-20 17:16:41.848742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.969 [2024-11-20 17:16:41.848751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0
00:22:23.969 [2024-11-20 17:16:41.848811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.969 [2024-11-20 17:16:41.848816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.969 [2024-11-20 17:16:41.848819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.969 [2024-11-20 17:16:41.848822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690
00:22:23.969 [2024-11-20 17:16:41.848830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.969 [2024-11-20 17:16:41.848833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.969 [2024-11-20 17:16:41.848836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690)
00:22:23.969 [2024-11-20 17:16:41.848842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.969 [2024-11-20 17:16:41.848851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0
00:22:23.970 [2024-11-20 17:16:41.848923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.970 [2024-11-20 17:16:41.848928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.970 [2024-11-20 17:16:41.848931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.848934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690
00:22:23.970 [2024-11-20 17:16:41.848942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.848946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.848948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690)
00:22:23.970 [2024-11-20 17:16:41.848954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.970 [2024-11-20 17:16:41.848963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0
00:22:23.970 [2024-11-20 17:16:41.849025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.970 [2024-11-20 17:16:41.849030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.970 [2024-11-20 17:16:41.849033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.849038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690
00:22:23.970 [2024-11-20 17:16:41.849047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.849050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.849053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690)
00:22:23.970 [2024-11-20 17:16:41.849058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.970 [2024-11-20 17:16:41.849067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0
00:22:23.970 [2024-11-20 17:16:41.849126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.970 [2024-11-20 17:16:41.849131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.970 [2024-11-20 17:16:41.849134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.849137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690
00:22:23.970 [2024-11-20 17:16:41.849145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.849148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.849151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690)
00:22:23.970 [2024-11-20 17:16:41.849157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.970 [2024-11-20 17:16:41.849166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0
00:22:23.970 [2024-11-20 17:16:41.853213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.970 [2024-11-20 17:16:41.853221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.970 [2024-11-20 17:16:41.853224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.853227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690
00:22:23.970 [2024-11-20 17:16:41.853236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.853239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.853242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63f690)
00:22:23.970 [2024-11-20 17:16:41.853248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.970 [2024-11-20 17:16:41.853259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a1580, cid 3, qid 0
00:22:23.970 [2024-11-20 17:16:41.853411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:23.970 [2024-11-20 17:16:41.853417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:23.970 [2024-11-20 17:16:41.853420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:23.970 [2024-11-20 17:16:41.853423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6a1580) on tqpair=0x63f690
00:22:23.970 [2024-11-20 17:16:41.853429] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:22:23.970 0%
00:22:23.970 Data Units Read: 0
00:22:23.970 Data Units Written: 0
00:22:23.970 Host Read Commands: 0
00:22:23.970 Host Write Commands: 0
00:22:23.970 Controller Busy Time: 0 minutes
00:22:23.970 Power Cycles: 0
00:22:23.970 Power On Hours: 0 hours
00:22:23.970 Unsafe Shutdowns: 0
00:22:23.970 Unrecoverable Media Errors: 0
00:22:23.970 Lifetime Error Log Entries: 0
00:22:23.970 Warning Temperature Time: 0 minutes
00:22:23.970 Critical Temperature Time: 0 minutes
00:22:23.970 
00:22:23.970 Number of Queues
00:22:23.970 ================
00:22:23.970 Number of I/O Submission Queues: 127
00:22:23.970 Number of I/O Completion Queues: 127
00:22:23.970 
00:22:23.970 Active Namespaces
00:22:23.970 =================
00:22:23.970 Namespace ID:1
00:22:23.970 Error Recovery Timeout: Unlimited
00:22:23.970 Command Set Identifier: NVM (00h)
00:22:23.970 Deallocate: Supported
00:22:23.970 Deallocated/Unwritten Error: Not Supported
00:22:23.970 Deallocated Read Value: Unknown
00:22:23.970 Deallocate in Write Zeroes: Not Supported
00:22:23.970 Deallocated Guard Field: 0xFFFF
00:22:23.970 Flush: Supported
00:22:23.970 Reservation: Supported
00:22:23.970 Namespace Sharing Capabilities: Multiple Controllers
00:22:23.970 Size (in LBAs): 131072 (0GiB)
00:22:23.970 Capacity (in LBAs): 131072 (0GiB)
00:22:23.970 Utilization (in LBAs): 131072 (0GiB)
00:22:23.970 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:23.970 EUI64: ABCDEF0123456789
00:22:23.970 UUID: 044e6c5d-2cfe-4a11-8576-da1ce193af84
00:22:23.970 Thin Provisioning: Not Supported
00:22:23.970 Per-NS Atomic Units: Yes
00:22:23.970 Atomic Boundary Size (Normal): 0
00:22:23.970 Atomic Boundary Size (PFail): 0
00:22:23.970 Atomic Boundary Offset: 0
00:22:23.970 Maximum Single Source Range Length: 65535
00:22:23.970 Maximum Copy Length: 65535
00:22:23.970 Maximum Source Range Count: 1
00:22:23.970 NGUID/EUI64 Never Reused: No
00:22:23.970 Namespace Write Protected: No
00:22:23.970 Number of LBA Formats: 1
00:22:23.970 Current LBA Format: LBA Format #00
00:22:23.970 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:23.970 
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:23.970 17:16:41 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.970 rmmod nvme_tcp 00:22:23.970 rmmod nvme_fabrics 00:22:23.970 rmmod nvme_keyring 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2576137 ']' 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2576137 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2576137 ']' 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2576137 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.970 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2576137 00:22:24.230 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.230 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.230 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2576137' 00:22:24.230 killing process with pid 2576137 00:22:24.230 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2576137 00:22:24.230 17:16:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2576137 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.230 17:16:42 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.230 17:16:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.769 00:22:26.769 real 0m9.302s 00:22:26.769 user 0m5.394s 00:22:26.769 sys 0m4.885s 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.769 ************************************ 00:22:26.769 END TEST nvmf_identify 00:22:26.769 ************************************ 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.769 17:16:44 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.769 ************************************ 00:22:26.769 START TEST nvmf_perf 00:22:26.769 ************************************ 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:26.769 * Looking for test storage... 00:22:26.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.769 17:16:44 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.769 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.769 --rc genhtml_branch_coverage=1 00:22:26.769 --rc genhtml_function_coverage=1 00:22:26.769 --rc genhtml_legend=1 00:22:26.769 --rc geninfo_all_blocks=1 00:22:26.769 --rc geninfo_unexecuted_blocks=1 00:22:26.769 00:22:26.769 ' 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.769 --rc genhtml_branch_coverage=1 00:22:26.769 --rc genhtml_function_coverage=1 00:22:26.769 --rc genhtml_legend=1 00:22:26.769 --rc geninfo_all_blocks=1 00:22:26.769 --rc geninfo_unexecuted_blocks=1 00:22:26.769 00:22:26.769 ' 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.769 --rc genhtml_branch_coverage=1 00:22:26.769 --rc genhtml_function_coverage=1 00:22:26.769 --rc genhtml_legend=1 00:22:26.769 --rc geninfo_all_blocks=1 00:22:26.769 --rc geninfo_unexecuted_blocks=1 00:22:26.769 00:22:26.769 ' 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.769 --rc genhtml_branch_coverage=1 00:22:26.769 --rc genhtml_function_coverage=1 00:22:26.769 --rc genhtml_legend=1 00:22:26.769 --rc geninfo_all_blocks=1 00:22:26.769 --rc geninfo_unexecuted_blocks=1 00:22:26.769 00:22:26.769 ' 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.769 17:16:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.769 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.770 17:16:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:26.770 17:16:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.341 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:33.342 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.342 
17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:33.342 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:33.342 Found net devices under 0000:86:00.0: cvl_0_0 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:33.342 Found net devices under 0000:86:00.1: cvl_0_1 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:22:33.342 00:22:33.342 --- 10.0.0.2 ping statistics --- 00:22:33.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.342 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:22:33.342 00:22:33.342 --- 10.0.0.1 ping statistics --- 00:22:33.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.342 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2579721 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2579721 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:33.342 
17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2579721 ']' 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.342 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:33.342 [2024-11-20 17:16:50.582737] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:22:33.342 [2024-11-20 17:16:50.582778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.342 [2024-11-20 17:16:50.661330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.342 [2024-11-20 17:16:50.704330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.343 [2024-11-20 17:16:50.704366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.343 [2024-11-20 17:16:50.704373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.343 [2024-11-20 17:16:50.704380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.343 [2024-11-20 17:16:50.704385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:33.343 [2024-11-20 17:16:50.706001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.343 [2024-11-20 17:16:50.706110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.343 [2024-11-20 17:16:50.706262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.343 [2024-11-20 17:16:50.706264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.343 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.343 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:33.343 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.343 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.343 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:33.343 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.343 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:33.343 17:16:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:35.872 17:16:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:35.872 17:16:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:36.130 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:36.130 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:36.389 17:16:54 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:36.389 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:36.389 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:36.389 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:36.389 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:36.647 [2024-11-20 17:16:54.454557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.647 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.647 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:36.647 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:36.905 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:36.905 17:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:37.163 17:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.421 [2024-11-20 17:16:55.250738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.422 17:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:37.680 17:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:37.680 17:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:37.680 17:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:37.680 17:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:39.055 Initializing NVMe Controllers 00:22:39.055 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:39.055 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:39.055 Initialization complete. Launching workers. 00:22:39.055 ======================================================== 00:22:39.055 Latency(us) 00:22:39.055 Device Information : IOPS MiB/s Average min max 00:22:39.055 PCIE (0000:5e:00.0) NSID 1 from core 0: 97770.72 381.92 326.75 34.68 4761.13 00:22:39.055 ======================================================== 00:22:39.055 Total : 97770.72 381.92 326.75 34.68 4761.13 00:22:39.055 00:22:39.055 17:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:40.431 Initializing NVMe Controllers 00:22:40.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:40.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:40.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:40.431 Initialization complete. Launching workers. 
00:22:40.431 ======================================================== 00:22:40.431 Latency(us) 00:22:40.431 Device Information : IOPS MiB/s Average min max 00:22:40.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.75 0.38 10560.18 109.52 45681.63 00:22:40.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.87 0.20 19417.78 6984.96 47905.73 00:22:40.431 ======================================================== 00:22:40.431 Total : 149.62 0.58 13630.82 109.52 47905.73 00:22:40.431 00:22:40.431 17:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:41.804 Initializing NVMe Controllers 00:22:41.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:41.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:41.804 Initialization complete. Launching workers. 
00:22:41.804 ======================================================== 00:22:41.804 Latency(us) 00:22:41.804 Device Information : IOPS MiB/s Average min max 00:22:41.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11159.56 43.59 2866.67 453.36 8820.51 00:22:41.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3811.53 14.89 8393.79 5871.83 23310.34 00:22:41.804 ======================================================== 00:22:41.804 Total : 14971.09 58.48 4273.83 453.36 23310.34 00:22:41.804 00:22:41.804 17:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:41.804 17:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:41.804 17:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:44.334 Initializing NVMe Controllers 00:22:44.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.334 Controller IO queue size 128, less than required. 00:22:44.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.334 Controller IO queue size 128, less than required. 00:22:44.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:44.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:44.334 Initialization complete. Launching workers. 
00:22:44.334 ======================================================== 00:22:44.334 Latency(us) 00:22:44.334 Device Information : IOPS MiB/s Average min max 00:22:44.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1780.63 445.16 73165.15 47056.22 125824.09 00:22:44.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.38 151.59 217313.50 86951.42 335124.69 00:22:44.334 ======================================================== 00:22:44.334 Total : 2387.01 596.75 109783.36 47056.22 335124.69 00:22:44.334 00:22:44.334 17:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:44.334 No valid NVMe controllers or AIO or URING devices found 00:22:44.334 Initializing NVMe Controllers 00:22:44.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.334 Controller IO queue size 128, less than required. 00:22:44.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.334 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:44.334 Controller IO queue size 128, less than required. 00:22:44.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.334 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:44.334 WARNING: Some requested NVMe devices were skipped 00:22:44.334 17:17:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:46.858 Initializing NVMe Controllers 00:22:46.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.858 Controller IO queue size 128, less than required. 00:22:46.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.858 Controller IO queue size 128, less than required. 00:22:46.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:46.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:46.858 Initialization complete. Launching workers. 
00:22:46.858 00:22:46.858 ==================== 00:22:46.858 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:46.858 TCP transport: 00:22:46.858 polls: 15150 00:22:46.858 idle_polls: 11673 00:22:46.858 sock_completions: 3477 00:22:46.858 nvme_completions: 6447 00:22:46.858 submitted_requests: 9664 00:22:46.858 queued_requests: 1 00:22:46.858 00:22:46.858 ==================== 00:22:46.858 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:46.858 TCP transport: 00:22:46.858 polls: 15031 00:22:46.858 idle_polls: 11409 00:22:46.858 sock_completions: 3622 00:22:46.858 nvme_completions: 6491 00:22:46.858 submitted_requests: 9702 00:22:46.858 queued_requests: 1 00:22:46.858 ======================================================== 00:22:46.858 Latency(us) 00:22:46.858 Device Information : IOPS MiB/s Average min max 00:22:46.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1610.30 402.58 81454.41 55771.80 128922.91 00:22:46.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1621.30 405.32 79276.97 41014.33 129601.89 00:22:46.858 ======================================================== 00:22:46.858 Total : 3231.60 807.90 80361.98 41014.33 129601.89 00:22:46.858 00:22:47.116 17:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:47.116 17:17:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.116 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.116 rmmod nvme_tcp 00:22:47.375 rmmod nvme_fabrics 00:22:47.375 rmmod nvme_keyring 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2579721 ']' 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2579721 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2579721 ']' 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2579721 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2579721 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2579721' 00:22:47.375 killing process with pid 2579721 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2579721 00:22:47.375 17:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2579721 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.275 17:17:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.807 00:22:51.807 real 0m25.008s 00:22:51.807 user 1m5.719s 00:22:51.807 sys 0m8.398s 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 ************************************ 00:22:51.807 END TEST nvmf_perf 00:22:51.807 ************************************ 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 ************************************ 00:22:51.807 START TEST nvmf_fio_host 00:22:51.807 ************************************ 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:51.807 * Looking for test storage... 00:22:51.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.807 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.808 17:17:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.808 17:17:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:51.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.808 --rc genhtml_branch_coverage=1 00:22:51.808 --rc genhtml_function_coverage=1 00:22:51.808 --rc genhtml_legend=1 00:22:51.808 --rc geninfo_all_blocks=1 00:22:51.808 --rc geninfo_unexecuted_blocks=1 00:22:51.808 00:22:51.808 ' 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:51.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.808 --rc genhtml_branch_coverage=1 00:22:51.808 --rc genhtml_function_coverage=1 00:22:51.808 --rc genhtml_legend=1 00:22:51.808 --rc geninfo_all_blocks=1 00:22:51.808 --rc geninfo_unexecuted_blocks=1 00:22:51.808 00:22:51.808 ' 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:51.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.808 --rc genhtml_branch_coverage=1 00:22:51.808 --rc genhtml_function_coverage=1 00:22:51.808 --rc genhtml_legend=1 00:22:51.808 --rc geninfo_all_blocks=1 00:22:51.808 --rc geninfo_unexecuted_blocks=1 00:22:51.808 00:22:51.808 ' 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:51.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.808 --rc genhtml_branch_coverage=1 00:22:51.808 --rc genhtml_function_coverage=1 00:22:51.808 --rc genhtml_legend=1 00:22:51.808 --rc geninfo_all_blocks=1 00:22:51.808 --rc geninfo_unexecuted_blocks=1 00:22:51.808 00:22:51.808 ' 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.808 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.809 17:17:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.809 17:17:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:58.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:58.498 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.498 17:17:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.498 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:58.499 Found net devices under 0000:86:00.0: cvl_0_0 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:58.499 Found net devices under 0000:86:00.1: cvl_0_1 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.499 17:17:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:22:58.499 00:22:58.499 --- 10.0.0.2 ping statistics --- 00:22:58.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.499 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:22:58.499 00:22:58.499 --- 10.0.0.1 ping statistics --- 00:22:58.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.499 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2586028 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2586028 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2586028 ']' 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.499 17:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.499 [2024-11-20 17:17:15.617037] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:22:58.499 [2024-11-20 17:17:15.617081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.499 [2024-11-20 17:17:15.691549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.499 [2024-11-20 17:17:15.731972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.499 [2024-11-20 17:17:15.732007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:58.499 [2024-11-20 17:17:15.732014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.499 [2024-11-20 17:17:15.732019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.499 [2024-11-20 17:17:15.732024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.499 [2024-11-20 17:17:15.733614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.499 [2024-11-20 17:17:15.733720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.500 [2024-11-20 17:17:15.733820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.500 [2024-11-20 17:17:15.733820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.500 17:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.500 17:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:58.500 17:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:58.758 [2024-11-20 17:17:16.629350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.758 17:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:58.758 17:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.758 17:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.758 17:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:59.017 Malloc1 00:22:59.017 17:17:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.276 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:59.276 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.534 [2024-11-20 17:17:17.452698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.534 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:59.793 17:17:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:59.793 17:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:00.052 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:00.052 fio-3.35 00:23:00.052 Starting 1 thread 00:23:02.612 [2024-11-20 17:17:20.337297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0130 is same with the state(6) to be set 00:23:02.612 [2024-11-20 17:17:20.337352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0130 is same with the state(6) to be set 00:23:02.612 00:23:02.612 test: (groupid=0, jobs=1): err= 0: pid=2586630: Wed Nov 20 17:17:20 2024 00:23:02.612 read: IOPS=12.0k, BW=46.8MiB/s (49.0MB/s)(93.7MiB/2005msec) 00:23:02.612 slat (nsec): min=1537, max=241484, avg=1702.73, stdev=2211.57 00:23:02.612 clat (usec): min=3129, max=10570, avg=5894.91, stdev=476.74 00:23:02.612 lat (usec): min=3160, max=10572, avg=5896.61, stdev=476.70 00:23:02.612 clat percentiles (usec): 00:23:02.612 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5538], 00:23:02.612 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:23:02.612 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:23:02.612 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 9765], 00:23:02.612 | 99.99th=[10028] 00:23:02.612 bw ( KiB/s): min=47064, max=48312, per=99.95%, avg=47846.00, stdev=554.50, samples=4 00:23:02.612 iops : min=11766, max=12078, avg=11961.50, stdev=138.63, samples=4 00:23:02.612 write: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(93.3MiB/2005msec); 0 zone resets 00:23:02.612 slat (nsec): min=1565, max=225065, avg=1756.40, stdev=1633.08 00:23:02.612 clat 
(usec): min=2433, max=9294, avg=4787.18, stdev=374.50 00:23:02.612 lat (usec): min=2448, max=9296, avg=4788.94, stdev=374.54 00:23:02.612 clat percentiles (usec): 00:23:02.612 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:23:02.612 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:23:02.612 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:23:02.612 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6980], 99.95th=[ 7832], 00:23:02.612 | 99.99th=[ 8717] 00:23:02.612 bw ( KiB/s): min=47104, max=48256, per=100.00%, avg=47666.00, stdev=471.50, samples=4 00:23:02.612 iops : min=11776, max=12064, avg=11916.50, stdev=117.87, samples=4 00:23:02.612 lat (msec) : 4=0.78%, 10=99.21%, 20=0.01% 00:23:02.612 cpu : usr=73.30%, sys=25.75%, ctx=108, majf=0, minf=3 00:23:02.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:02.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:02.612 issued rwts: total=23996,23887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.612 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:02.612 00:23:02.612 Run status group 0 (all jobs): 00:23:02.612 READ: bw=46.8MiB/s (49.0MB/s), 46.8MiB/s-46.8MiB/s (49.0MB/s-49.0MB/s), io=93.7MiB (98.3MB), run=2005-2005msec 00:23:02.612 WRITE: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=93.3MiB (97.8MB), run=2005-2005msec 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:02.612 17:17:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:02.874 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:02.874 fio-3.35 00:23:02.874 Starting 1 thread 00:23:04.774 [2024-11-20 17:17:22.606001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b9e0 is same with the state(6) to be set 00:23:05.032 00:23:05.032 test: (groupid=0, jobs=1): err= 0: pid=2587193: Wed Nov 20 17:17:23 2024 00:23:05.032 read: IOPS=10.9k, BW=171MiB/s (179MB/s)(342MiB/2004msec) 00:23:05.032 slat (nsec): min=2465, max=87569, avg=2822.38, stdev=1332.71 00:23:05.032 clat (usec): min=1509, max=49878, avg=6860.67, stdev=3366.82 00:23:05.032 lat (usec): min=1511, max=49881, avg=6863.50, stdev=3366.88 00:23:05.032 clat percentiles (usec): 00:23:05.032 | 1.00th=[ 3621], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5342], 00:23:05.032 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 7046], 00:23:05.032 | 70.00th=[ 7439], 80.00th=[ 7832], 90.00th=[ 8586], 95.00th=[ 9372], 00:23:05.032 | 99.00th=[11600], 99.50th=[42730], 99.90th=[49021], 99.95th=[49546], 00:23:05.032 | 99.99th=[49546] 00:23:05.032 bw ( KiB/s): min=81312, 
max=98176, per=51.05%, avg=89208.00, stdev=6969.31, samples=4 00:23:05.032 iops : min= 5082, max= 6136, avg=5575.50, stdev=435.58, samples=4 00:23:05.032 write: IOPS=6679, BW=104MiB/s (109MB/s)(183MiB/1751msec); 0 zone resets 00:23:05.032 slat (usec): min=28, max=255, avg=31.32, stdev= 6.41 00:23:05.032 clat (usec): min=2574, max=14515, avg=8469.37, stdev=1382.53 00:23:05.032 lat (usec): min=2604, max=14545, avg=8500.70, stdev=1383.72 00:23:05.032 clat percentiles (usec): 00:23:05.032 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7373], 00:23:05.032 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8586], 00:23:05.032 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[11076], 00:23:05.032 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13698], 99.95th=[13829], 00:23:05.032 | 99.99th=[14222] 00:23:05.032 bw ( KiB/s): min=85664, max=102528, per=87.21%, avg=93192.00, stdev=6963.82, samples=4 00:23:05.032 iops : min= 5354, max= 6408, avg=5824.50, stdev=435.24, samples=4 00:23:05.032 lat (msec) : 2=0.03%, 4=1.84%, 10=91.42%, 20=6.33%, 50=0.38% 00:23:05.032 cpu : usr=83.92%, sys=14.73%, ctx=90, majf=0, minf=3 00:23:05.032 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:05.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:05.032 issued rwts: total=21888,11695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.032 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:05.032 00:23:05.032 Run status group 0 (all jobs): 00:23:05.032 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=342MiB (359MB), run=2004-2004msec 00:23:05.032 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=183MiB (192MB), run=1751-1751msec 00:23:05.032 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.290 rmmod nvme_tcp 00:23:05.290 rmmod nvme_fabrics 00:23:05.290 rmmod nvme_keyring 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2586028 ']' 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2586028 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2586028 ']' 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2586028 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.290 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586028 00:23:05.548 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:05.548 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:05.548 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586028' 00:23:05.548 killing process with pid 2586028 00:23:05.548 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2586028 00:23:05.548 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2586028 00:23:05.548 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.549 17:17:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.549 17:17:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:08.095 00:23:08.095 real 0m16.227s 00:23:08.095 user 0m48.010s 00:23:08.095 sys 0m6.497s 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.095 ************************************ 00:23:08.095 END TEST nvmf_fio_host 00:23:08.095 ************************************ 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.095 ************************************ 00:23:08.095 START TEST nvmf_failover 00:23:08.095 ************************************ 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:08.095 * Looking for test storage... 
00:23:08.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.095 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:08.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.096 --rc genhtml_branch_coverage=1 00:23:08.096 --rc genhtml_function_coverage=1 00:23:08.096 --rc genhtml_legend=1 00:23:08.096 --rc geninfo_all_blocks=1 00:23:08.096 --rc geninfo_unexecuted_blocks=1 00:23:08.096 00:23:08.096 ' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:23:08.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.096 --rc genhtml_branch_coverage=1 00:23:08.096 --rc genhtml_function_coverage=1 00:23:08.096 --rc genhtml_legend=1 00:23:08.096 --rc geninfo_all_blocks=1 00:23:08.096 --rc geninfo_unexecuted_blocks=1 00:23:08.096 00:23:08.096 ' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:08.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.096 --rc genhtml_branch_coverage=1 00:23:08.096 --rc genhtml_function_coverage=1 00:23:08.096 --rc genhtml_legend=1 00:23:08.096 --rc geninfo_all_blocks=1 00:23:08.096 --rc geninfo_unexecuted_blocks=1 00:23:08.096 00:23:08.096 ' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:08.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.096 --rc genhtml_branch_coverage=1 00:23:08.096 --rc genhtml_function_coverage=1 00:23:08.096 --rc genhtml_legend=1 00:23:08.096 --rc geninfo_all_blocks=1 00:23:08.096 --rc geninfo_unexecuted_blocks=1 00:23:08.096 00:23:08.096 ' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:08.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:08.096 17:17:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.714 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.715 17:17:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:14.715 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.715 17:17:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:14.715 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.715 17:17:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:14.715 Found net devices under 0000:86:00.0: cvl_0_0 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:14.715 Found net devices under 0000:86:00.1: cvl_0_1 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:14.715 17:17:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:23:14.715 00:23:14.715 --- 10.0.0.2 ping statistics --- 00:23:14.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.715 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:23:14.715 00:23:14.715 --- 10.0.0.1 ping statistics --- 00:23:14.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.715 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2590957 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 2590957 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2590957 ']' 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.715 17:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.715 [2024-11-20 17:17:31.914930] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:23:14.715 [2024-11-20 17:17:31.914974] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.715 [2024-11-20 17:17:31.991380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:14.715 [2024-11-20 17:17:32.036002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.715 [2024-11-20 17:17:32.036037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.715 [2024-11-20 17:17:32.036044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.715 [2024-11-20 17:17:32.036050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:14.715 [2024-11-20 17:17:32.036055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.715 [2024-11-20 17:17:32.037478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.715 [2024-11-20 17:17:32.037537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.715 [2024-11-20 17:17:32.037538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.715 17:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.715 17:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:14.715 17:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.715 17:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.715 17:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.973 17:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.973 17:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:14.973 [2024-11-20 17:17:32.952211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.973 17:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:15.231 Malloc0 00:23:15.231 17:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:15.489 17:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.748 17:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.748 [2024-11-20 17:17:33.769257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.006 17:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:16.006 [2024-11-20 17:17:33.965810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.006 17:17:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:16.264 [2024-11-20 17:17:34.154446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2591437 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2591437 /var/tmp/bdevperf.sock 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2591437 ']' 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.264 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:16.523 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.523 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:16.523 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:16.780 NVMe0n1 00:23:16.780 17:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:17.038 00:23:17.038 17:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2591590 00:23:17.038 17:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:17.038 17:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:18.412 17:17:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:18.413 [2024-11-20 17:17:36.255380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff52d0 is same with the state(6) to be set
00:23:18.413 17:17:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:21.697 17:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:21.697 00:23:21.697
17:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:21.955 [2024-11-20 17:17:39.763959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff5fa0 is same with the state(6) to be set
00:23:21.955 17:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:25.238 17:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:25.238 [2024-11-20 17:17:42.970907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:25.238 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:26.173 17:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:26.173 [2024-11-20 17:17:44.182576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6ce0 is same with the state(6) to be set
00:23:26.173 17:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2591590
00:23:32.736 { 00:23:32.736 "results": [ 00:23:32.736 { 00:23:32.736 "job": "NVMe0n1", 00:23:32.736 "core_mask": "0x1", 00:23:32.736 "workload": "verify", 00:23:32.736 "status": "finished", 00:23:32.736 "verify_range": { 00:23:32.736 "start": 0, 00:23:32.736 "length": 16384 00:23:32.736 }, 00:23:32.736 "queue_depth": 128, 00:23:32.736 "io_size": 4096, 00:23:32.736 "runtime": 15.009082, 00:23:32.736 "iops": 11231.999398764028, 00:23:32.736 "mibps": 43.874997651421985, 00:23:32.736 "io_failed": 9781, 00:23:32.736 "io_timeout": 0, 00:23:32.736 "avg_latency_us": 10749.367010785654, 00:23:32.736 "min_latency_us": 415.45142857142855, 00:23:32.736 "max_latency_us": 15603.809523809523 00:23:32.736 } 00:23:32.736 ], 00:23:32.736 
"core_count": 1 00:23:32.736 } 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2591437 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2591437 ']' 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2591437 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2591437 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2591437' 00:23:32.736 killing process with pid 2591437 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2591437 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2591437 00:23:32.736 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:32.736 [2024-11-20 17:17:34.220020] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:23:32.736 [2024-11-20 17:17:34.220078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591437 ] 00:23:32.736 [2024-11-20 17:17:34.294897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.736 [2024-11-20 17:17:34.336257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.736 Running I/O for 15 seconds... 00:23:32.736 11254.00 IOPS, 43.96 MiB/s [2024-11-20T16:17:50.779Z] [2024-11-20 17:17:36.255771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.736 [2024-11-20 17:17:36.255805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs elided: the remaining in-flight READ/WRITE commands on qid:1 (lba:98016-98848, plus WRITEs lba:98952-99024) each completed with the same "ABORTED - SQ DELETION (00/08)" status ...]
00:23:32.739 [2024-11-20 17:17:36.257540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 
[2024-11-20 17:17:36.257626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.739 [2024-11-20 17:17:36.257676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.739 [2024-11-20 17:17:36.257697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.739 [2024-11-20 17:17:36.257704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.740 [2024-11-20 17:17:36.257709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98944 len:8 PRP1 0x0 PRP2 0x0 00:23:32.740 [2024-11-20 17:17:36.257716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 
17:17:36.257759] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:32.740 [2024-11-20 17:17:36.257780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.740 [2024-11-20 17:17:36.257789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:36.257797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.740 [2024-11-20 17:17:36.257804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:36.257813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.740 [2024-11-20 17:17:36.257820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:36.257827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.740 [2024-11-20 17:17:36.257833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:36.257840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:32.740 [2024-11-20 17:17:36.260661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:32.740 [2024-11-20 17:17:36.260691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeac340 (9): Bad file descriptor 00:23:32.740 [2024-11-20 17:17:36.408827] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:32.740 10477.50 IOPS, 40.93 MiB/s [2024-11-20T16:17:50.783Z] 10776.00 IOPS, 42.09 MiB/s [2024-11-20T16:17:50.783Z] 10956.00 IOPS, 42.80 MiB/s [2024-11-20T16:17:50.783Z] [2024-11-20 17:17:39.764471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.740 [2024-11-20 17:17:39.764504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.740 [2024-11-20 17:17:39.764521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.740 [2024-11-20 17:17:39.764539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.740 [2024-11-20 17:17:39.764552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeac340 is same with the state(6) to be set 00:23:32.740 [2024-11-20 17:17:39.764603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:32.740 [2024-11-20 17:17:39.764853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.740 [2024-11-20 17:17:39.764942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.740 [2024-11-20 17:17:39.764950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.764956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.764964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.764970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.764978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.764985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.764992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.764999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 
[2024-11-20 17:17:39.765102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.741 [2024-11-20 17:17:39.765266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.741 [2024-11-20 17:17:39.765281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 
[2024-11-20 17:17:39.765354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.741 [2024-11-20 17:17:39.765424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.741 [2024-11-20 17:17:39.765433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.741 [2024-11-20 17:17:39.765439-766442] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued READ commands (sqid:1 nsid:1 lba:69328-69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and queued WRITE commands (sqid:1 nsid:1 lba:69512-69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated per-command entries condensed]
00:23:32.743 [2024-11-20 17:17:39.766466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:32.743 [2024-11-20 17:17:39.766472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:32.743 [2024-11-20 17:17:39.766479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69912 len:8 PRP1 0x0 PRP2 0x0
00:23:32.743 [2024-11-20 17:17:39.766485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:32.743 [2024-11-20 17:17:39.766527] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:32.743 [2024-11-20 17:17:39.766536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
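[Editor's note: the abort-and-failover sequence above can be modeled as follows. This is a hedged illustrative sketch, not SPDK code: the class and attribute names (`Qpair`, `Controller`, `abort_queued_reqs`, `failover`) are hypothetical; only the status code "ABORTED - SQ DELETION" (sct=0x0, sc=0x08, printed as "00/08"), the LBA range, and the two transport addresses are taken from the log.]

```python
# Minimal model of what the log shows: when the submission queue is deleted
# during a path failure, every queued request completes with
# ABORTED - SQ DELETION (00/08), then the controller fails over to the
# next transport address.

ABORTED_SQ_DELETION = (0x0, 0x08)  # (status code type, status code): "00/08"

class Qpair:
    """Hypothetical stand-in for an I/O queue pair with queued requests."""
    def __init__(self):
        self.queued = []

    def submit(self, opc, lba):
        self.queued.append({"opc": opc, "lba": lba, "status": None})

    def abort_queued_reqs(self):
        # Complete every queued request with ABORTED - SQ DELETION.
        for req in self.queued:
            req["status"] = ABORTED_SQ_DELETION
        done, self.queued = self.queued, []
        return done

class Controller:
    """Hypothetical stand-in tracking an ordered list of paths (trids)."""
    def __init__(self, trids):
        self.trids = trids
        self.active = 0

    def failover(self):
        # Move to the next path, wrapping around.
        self.active = (self.active + 1) % len(self.trids)
        return self.trids[self.active]

qp = Qpair()
for lba in range(69328, 69496, 8):   # READ LBA range taken from the log
    qp.submit("READ", lba)
aborted = qp.abort_queued_reqs()

ctrlr = Controller(["10.0.0.2:4421", "10.0.0.2:4422"])
new_path = ctrlr.failover()          # failover target as seen in the log
```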
00:23:32.743 [2024-11-20 17:17:39.769311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:32.743 [2024-11-20 17:17:39.769338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeac340 (9): Bad file descriptor
00:23:32.743 [2024-11-20 17:17:39.791494] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:32.743 10964.20 IOPS, 42.83 MiB/s [2024-11-20T16:17:50.786Z] 11043.17 IOPS, 43.14 MiB/s [2024-11-20T16:17:50.786Z] 11122.29 IOPS, 43.45 MiB/s [2024-11-20T16:17:50.786Z] 11152.00 IOPS, 43.56 MiB/s [2024-11-20T16:17:50.786Z]
00:23:32.743 [2024-11-20 17:17:44.183329-183906] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued WRITE commands (sqid:1 nsid:1 lba:88064-88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated per-command entries condensed]
00:23:32.744 [2024-11-20 17:17:44.183914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 [2024-11-20 17:17:44.183921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.183929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 [2024-11-20 17:17:44.183935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.183943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 [2024-11-20 17:17:44.183950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.183957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 [2024-11-20 17:17:44.183964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.183971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 [2024-11-20 17:17:44.183978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.183986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 [2024-11-20 17:17:44.183993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.184001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 
[2024-11-20 17:17:44.184008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.184015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 [2024-11-20 17:17:44.184022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.184029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.744 [2024-11-20 17:17:44.184035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.744 [2024-11-20 17:17:44.184043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.745 [2024-11-20 17:17:44.184064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184086] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184256] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.745 [2024-11-20 17:17:44.184341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.745 [2024-11-20 17:17:44.184369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88592 len:8 PRP1 0x0 PRP2 0x0 00:23:32.745 [2024-11-20 17:17:44.184377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.745 [2024-11-20 17:17:44.184422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.745 [2024-11-20 17:17:44.184437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.745 [2024-11-20 17:17:44.184450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.745 [2024-11-20 17:17:44.184464] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeac340 is same with the state(6) to be set 00:23:32.745 [2024-11-20 17:17:44.184589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.745 [2024-11-20 17:17:44.184596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.745 [2024-11-20 17:17:44.184602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88600 len:8 PRP1 0x0 PRP2 0x0 00:23:32.745 [2024-11-20 17:17:44.184609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.745 [2024-11-20 17:17:44.184631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.745 [2024-11-20 17:17:44.184636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88608 len:8 PRP1 0x0 PRP2 0x0 00:23:32.745 [2024-11-20 17:17:44.184642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.745 [2024-11-20 17:17:44.184654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.745 [2024-11-20 17:17:44.184659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88616 len:8 PRP1 0x0 PRP2 0x0 00:23:32.745 [2024-11-20 17:17:44.184665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:32.745 [2024-11-20 17:17:44.184672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.745 [2024-11-20 17:17:44.184677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.745 [2024-11-20 17:17:44.184682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88624 len:8 PRP1 0x0 PRP2 0x0 00:23:32.745 [2024-11-20 17:17:44.184688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.745 [2024-11-20 17:17:44.184694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.745 [2024-11-20 17:17:44.184699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.745 [2024-11-20 17:17:44.184704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88632 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88640 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:23:32.746 [2024-11-20 17:17:44.184754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88648 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88656 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88664 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88672 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88680 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88688 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88696 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 
[2024-11-20 17:17:44.184913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88704 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88712 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88720 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.184977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.184983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.184988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:88728 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.184994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88736 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88744 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88752 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185068] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88760 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88768 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88776 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 
17:17:44.185147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88784 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88792 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88800 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88808 len:8 PRP1 0x0 PRP2 0x0 00:23:32.746 [2024-11-20 17:17:44.185225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.746 [2024-11-20 17:17:44.185231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.746 [2024-11-20 17:17:44.185236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.746 [2024-11-20 17:17:44.185241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88816 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87816 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87824 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185305] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87832 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87840 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87848 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87856 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 
[2024-11-20 17:17:44.185384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87864 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87872 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87880 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87888 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87896 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87904 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185540] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87912 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87920 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87928 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88824 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 
[2024-11-20 17:17:44.185622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87936 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87944 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87952 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 
[2024-11-20 17:17:44.185700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87960 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87968 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.747 [2024-11-20 17:17:44.185747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87976 len:8 PRP1 0x0 PRP2 0x0 00:23:32.747 [2024-11-20 17:17:44.185753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.747 [2024-11-20 17:17:44.185760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.747 [2024-11-20 17:17:44.185764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87984 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87992 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88000 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88008 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185856] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88016 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88024 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88032 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88040 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 
[2024-11-20 17:17:44.185936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88048 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88056 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.185982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.185988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.185993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.185998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88064 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.186004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.186011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.186015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.186021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88072 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88080 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88088 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88096 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88104 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88112 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88120 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88128 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88136 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88144 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88152 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88160 len:8 PRP1 0x0 PRP2 0x0 00:23:32.748 [2024-11-20 17:17:44.190749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.748 [2024-11-20 17:17:44.190755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.748 [2024-11-20 17:17:44.190760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.748 [2024-11-20 17:17:44.190765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88168 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.749 [2024-11-20 17:17:44.190778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.749 [2024-11-20 17:17:44.190784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.749 [2024-11-20 17:17:44.190789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88176 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.749 [2024-11-20 17:17:44.190801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.749 [2024-11-20 17:17:44.190806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.749 [2024-11-20 17:17:44.190811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88184 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.749 [2024-11-20 17:17:44.190824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.749 [2024-11-20 17:17:44.190828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.749 [2024-11-20 17:17:44.190833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88192 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.749 [2024-11-20 17:17:44.190846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.749 [2024-11-20 17:17:44.190850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.749 [2024-11-20 17:17:44.190856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88200 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.749 [2024-11-20 17:17:44.190868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.749 
[2024-11-20 17:17:44.190873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.749 [2024-11-20 17:17:44.190878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88208 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.749 [2024-11-20 17:17:44.190890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.749 [2024-11-20 17:17:44.190895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.749 [2024-11-20 17:17:44.190900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88216 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.749 [2024-11-20 17:17:44.190912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.749 [2024-11-20 17:17:44.190917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.749 [2024-11-20 17:17:44.190922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88224 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.749 [2024-11-20 17:17:44.190935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.749 [2024-11-20 17:17:44.190940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.749 [2024-11-20 17:17:44.190945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:88232 len:8 PRP1 0x0 PRP2 0x0 00:23:32.749 [2024-11-20 17:17:44.190951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same four-message cycle (579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually; 243:nvme_io_qpair_print_command; 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) repeats from 17:17:44.190958 through 17:17:44.191996 for 46 further queued commands: WRITE sqid:1 cid:0 nsid:1 lba:88240 through lba:88592 (len:8, LBA step 8) plus one READ sqid:1 cid:0 nsid:1 lba:87808 len:8 ...] 00:23:32.751 [2024-11-20 17:17:44.192038] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:32.751 [2024-11-20 17:17:44.192048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:32.751 [2024-11-20 17:17:44.196117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:32.751 11191.33 IOPS, 43.72 MiB/s [2024-11-20T16:17:50.794Z] [2024-11-20 17:17:44.196153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeac340 (9): Bad file descriptor 00:23:32.751 [2024-11-20 17:17:44.218263] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:23:32.751 11174.00 IOPS, 43.65 MiB/s [2024-11-20T16:17:50.794Z] 11195.18 IOPS, 43.73 MiB/s [2024-11-20T16:17:50.794Z] 11223.92 IOPS, 43.84 MiB/s [2024-11-20T16:17:50.794Z] 11231.77 IOPS, 43.87 MiB/s [2024-11-20T16:17:50.794Z] 11233.71 IOPS, 43.88 MiB/s 00:23:32.751 Latency(us) 00:23:32.751 [2024-11-20T16:17:50.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.751 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:32.751 Verification LBA range: start 0x0 length 0x4000 00:23:32.751 NVMe0n1 : 15.01 11232.00 43.87 651.67 0.00 10749.37 415.45 15603.81 00:23:32.751 [2024-11-20T16:17:50.794Z] =================================================================================================================== 00:23:32.751 [2024-11-20T16:17:50.794Z] Total : 11232.00 43.87 651.67 0.00 10749.37 415.45 15603.81 00:23:32.751 Received shutdown signal, test time was about 15.000000 seconds 00:23:32.751 00:23:32.751 Latency(us) 00:23:32.751 [2024-11-20T16:17:50.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.751 [2024-11-20T16:17:50.794Z] =================================================================================================================== 00:23:32.751 [2024-11-20T16:17:50.794Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2593984 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:32.751 
17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2593984 /var/tmp/bdevperf.sock 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2593984 ']' 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:32.751 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.009 [2024-11-20 17:17:50.890950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.009 17:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:33.268 [2024-11-20 17:17:51.087494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:33.268 17:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.526 NVMe0n1 00:23:33.526 17:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.783 00:23:33.783 17:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:34.041 00:23:34.041 17:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:34.041 17:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:34.299 17:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:34.557 17:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:37.838 17:17:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:37.838 17:17:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:37.838 17:17:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:23:37.838 17:17:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2594895 00:23:37.838 17:17:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2594895 00:23:38.772 { 00:23:38.772 "results": [ 00:23:38.772 { 00:23:38.772 "job": "NVMe0n1", 00:23:38.772 "core_mask": "0x1", 00:23:38.772 "workload": "verify", 00:23:38.772 "status": "finished", 00:23:38.772 "verify_range": { 00:23:38.772 "start": 0, 00:23:38.772 "length": 16384 00:23:38.772 }, 00:23:38.772 "queue_depth": 128, 00:23:38.772 "io_size": 4096, 00:23:38.772 "runtime": 1.007867, 00:23:38.772 "iops": 11309.03184646387, 00:23:38.772 "mibps": 44.17590565024949, 00:23:38.772 "io_failed": 0, 00:23:38.772 "io_timeout": 0, 00:23:38.772 "avg_latency_us": 11265.182127524462, 00:23:38.772 "min_latency_us": 1989.4857142857143, 00:23:38.772 "max_latency_us": 10610.590476190477 00:23:38.772 } 00:23:38.772 ], 00:23:38.772 "core_count": 1 00:23:38.773 } 00:23:38.773 17:17:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:38.773 [2024-11-20 17:17:50.503690] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:23:38.773 [2024-11-20 17:17:50.503746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593984 ] 00:23:38.773 [2024-11-20 17:17:50.579723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.773 [2024-11-20 17:17:50.617247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.773 [2024-11-20 17:17:52.425957] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:38.773 [2024-11-20 17:17:52.426003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.773 [2024-11-20 17:17:52.426013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.773 [2024-11-20 17:17:52.426022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.773 [2024-11-20 17:17:52.426029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.773 [2024-11-20 17:17:52.426036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.773 [2024-11-20 17:17:52.426043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.773 [2024-11-20 17:17:52.426051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.773 [2024-11-20 17:17:52.426057] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.773 [2024-11-20 17:17:52.426063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:38.773 [2024-11-20 17:17:52.426088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:38.773 [2024-11-20 17:17:52.426102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0d340 (9): Bad file descriptor 00:23:38.773 [2024-11-20 17:17:52.431999] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:38.773 Running I/O for 1 seconds... 00:23:38.773 11270.00 IOPS, 44.02 MiB/s 00:23:38.773 Latency(us) 00:23:38.773 [2024-11-20T16:17:56.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.773 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:38.773 Verification LBA range: start 0x0 length 0x4000 00:23:38.773 NVMe0n1 : 1.01 11309.03 44.18 0.00 0.00 11265.18 1989.49 10610.59 00:23:38.773 [2024-11-20T16:17:56.816Z] =================================================================================================================== 00:23:38.773 [2024-11-20T16:17:56.816Z] Total : 11309.03 44.18 0.00 0.00 11265.18 1989.49 10610.59 00:23:38.773 17:17:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.773 17:17:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:39.031 17:17:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:39.289 17:17:57 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:39.289 17:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:39.547 17:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:39.547 17:17:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2593984 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2593984 ']' 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2593984 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593984 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593984' 00:23:42.828 killing 
process with pid 2593984 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2593984 00:23:42.828 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2593984 00:23:43.086 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:43.086 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.344 rmmod nvme_tcp 00:23:43.344 rmmod nvme_fabrics 00:23:43.344 rmmod nvme_keyring 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2590957 ']' 00:23:43.344 17:18:01 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2590957 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2590957 ']' 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2590957 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2590957 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2590957' 00:23:43.344 killing process with pid 2590957 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2590957 00:23:43.344 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2590957 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.603 17:18:01 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.603 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.139 00:23:46.139 real 0m37.863s 00:23:46.139 user 1m59.622s 00:23:46.139 sys 0m7.990s 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:46.139 ************************************ 00:23:46.139 END TEST nvmf_failover 00:23:46.139 ************************************ 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.139 ************************************ 00:23:46.139 START TEST nvmf_host_discovery 00:23:46.139 ************************************ 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:46.139 * Looking for test storage... 
00:23:46.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.139 --rc genhtml_branch_coverage=1 00:23:46.139 --rc genhtml_function_coverage=1 00:23:46.139 --rc 
genhtml_legend=1 00:23:46.139 --rc geninfo_all_blocks=1 00:23:46.139 --rc geninfo_unexecuted_blocks=1 00:23:46.139 00:23:46.139 ' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.139 --rc genhtml_branch_coverage=1 00:23:46.139 --rc genhtml_function_coverage=1 00:23:46.139 --rc genhtml_legend=1 00:23:46.139 --rc geninfo_all_blocks=1 00:23:46.139 --rc geninfo_unexecuted_blocks=1 00:23:46.139 00:23:46.139 ' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.139 --rc genhtml_branch_coverage=1 00:23:46.139 --rc genhtml_function_coverage=1 00:23:46.139 --rc genhtml_legend=1 00:23:46.139 --rc geninfo_all_blocks=1 00:23:46.139 --rc geninfo_unexecuted_blocks=1 00:23:46.139 00:23:46.139 ' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:46.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.139 --rc genhtml_branch_coverage=1 00:23:46.139 --rc genhtml_function_coverage=1 00:23:46.139 --rc genhtml_legend=1 00:23:46.139 --rc geninfo_all_blocks=1 00:23:46.139 --rc geninfo_unexecuted_blocks=1 00:23:46.139 00:23:46.139 ' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.139 17:18:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.139 17:18:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.139 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.140 17:18:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.140 17:18:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.710 
17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.710 17:18:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:52.710 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:52.710 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:52.710 Found net devices under 0000:86:00.0: cvl_0_0 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.710 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:52.710 Found net devices under 0000:86:00.1: cvl_0_1 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
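The namespace plumbing traced above (nvmf/common.sh@250-@284) moves the target NIC into its own network namespace so target and initiator traffic crosses a real link. A condensed dry-run sketch follows; it prints the commands rather than executing them, since the real ones need root. The interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses come from this run's log; everything else is an assumption.

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the netns setup from nvmf/common.sh:
# flush both NICs, move the target NIC into a private namespace,
# address both sides, and bring the links up.
emit_netns_setup() {
    local target_if=$1 initiator_if=$2
    local ns="${target_if}_ns_spdk"
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
EOF
}

emit_netns_setup cvl_0_0 cvl_0_1
```

Running this prints the nine commands in the same order the trace executes them.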
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:23:52.711 00:23:52.711 --- 10.0.0.2 ping statistics --- 00:23:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.711 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:23:52.711 00:23:52.711 --- 10.0.0.1 ping statistics --- 00:23:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.711 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.711 
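The `ipts` call at nvmf/common.sh@287 expands (visible at @790) to a plain `iptables` invocation plus a tagging comment, so teardown can later find and delete exactly the rules this test added. A minimal reconstruction of the wrapper is below; `iptables` is stubbed with `echo` here because the real command needs root, so this only demonstrates the argument shaping.

```shell
# Stub standing in for the real iptables binary -- illustration only.
iptables() { echo "iptables $*"; }

# Reconstructed ipts wrapper: forward all arguments and append a
# "SPDK_NVMF:<original args>" comment to mark the rule as ours.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# The exact call from the log:
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The printed command matches the expanded form logged at @790, comment included.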
17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2599856 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2599856 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2599856 ']' 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.711 17:18:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 [2024-11-20 17:18:09.823468] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:23:52.711 [2024-11-20 17:18:09.823513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.711 [2024-11-20 17:18:09.901736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.711 [2024-11-20 17:18:09.942801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.711 [2024-11-20 17:18:09.942837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.711 [2024-11-20 17:18:09.942844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.711 [2024-11-20 17:18:09.942850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.711 [2024-11-20 17:18:09.942855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:52.711 [2024-11-20 17:18:09.943409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 [2024-11-20 17:18:10.713502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 [2024-11-20 17:18:10.725705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:52.711 17:18:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 null0 00:23:52.711 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.712 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:52.712 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.712 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.712 null1 00:23:52.712 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.712 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:52.712 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.712 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2600045 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2600045 /tmp/host.sock 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2600045 ']' 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:52.971 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.971 17:18:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.971 [2024-11-20 17:18:10.803293] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:23:52.971 [2024-11-20 17:18:10.803339] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600045 ] 00:23:52.971 [2024-11-20 17:18:10.881455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.972 [2024-11-20 17:18:10.923068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:53.231 
17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:53.231 17:18:11 
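The RPC traffic driving the test so far can be replayed against a recording stub, which makes the ordering visible without a live SPDK target. In the real run each `rpc_cmd` goes through `scripts/rpc.py` (target-side calls use the default `/var/tmp/spdk.sock`, host-side calls pass `-s /tmp/host.sock`); the stub below is an assumption for illustration.

```shell
# Record each RPC instead of issuing it, then replay the sequence
# seen in the trace: transport + discovery listener on the target,
# two null bdevs, then discovery started from the host app.
declare -a issued=()
rpc_cmd() { issued+=("$*"); }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

printf '%s\n' "${issued[@]}"
```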
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.231 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.232 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:53.233 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:53.233 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.233 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.233 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.233 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:53.233 
17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.233 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.233 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.233 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.234 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.234 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.234 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.492 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.493 [2024-11-20 17:18:11.339266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.493 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.753 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:53.753 17:18:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:54.321 [2024-11-20 17:18:12.081709] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:54.321 [2024-11-20 17:18:12.081731] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:54.321 [2024-11-20 17:18:12.081743] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.321 [2024-11-20 17:18:12.167997] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:54.321 [2024-11-20 17:18:12.262909] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:54.321 [2024-11-20 17:18:12.263681] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
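The `waitforcondition` loop the trace keeps re-entering (common/autotest_common.sh@918-@924) is a small polling helper: re-evaluate an arbitrary shell condition once per second, up to 10 tries. The success path, `eval`, and the 10-try budget are all visible in the log; the failure return code is an assumption.

```shell
# Reconstruction of waitforcondition from the trace: poll a shell
# expression until it succeeds or the retry budget runs out.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1   # assumed failure path (not shown in this log excerpt)
}

# Usage from the log: block until the host sees the discovered subsystem.
# waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
```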
1] Connecting qpair 0x9f7df0:1 started. 00:23:54.321 [2024-11-20 17:18:12.265101] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:54.321 [2024-11-20 17:18:12.265118] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:54.321 [2024-11-20 17:18:12.310901] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9f7df0 was disconnected and freed. delete nvme_qpair. 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.580 17:18:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.580 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.839 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.839 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:54.839 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:54.840 
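The `waitforcondition` checks that dominate this xtrace (`common/autotest_common.sh` @918–@924) all follow the same shape: stash the condition string, poll it up to `max=10` times with a one-second sleep between attempts, and return 0 as soon as it holds. A minimal reconstruction from the xtrace — an approximation of the helper, not a verbatim copy of `autotest_common.sh`:

```shell
# Sketch of the waitforcondition polling loop, reconstructed from the
# @918-@924 xtrace lines above (local cond / local max=10 / eval / sleep 1).
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        # eval the caller's condition string; succeed as soon as it holds
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Hypothetical usage: wait for a sentinel file to appear
touch /tmp/ready
waitforcondition '[[ -e /tmp/ready ]]' && echo "condition met"
# prints: condition met
```

Passing the condition as a single-quoted string is what produces the `eval '[[' '"$(get_bdev_list)"' == ...` word-splitting visible in the trace: the command substitution is re-evaluated on every polling iteration rather than once at call time.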
[2024-11-20 17:18:12.735803] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9c6620:1 started. 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.840 [2024-11-20 17:18:12.741716] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9c6620 was disconnected and freed. delete nvme_qpair. 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.840 17:18:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.840 [2024-11-20 17:18:12.831279] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.840 [2024-11-20 17:18:12.832214] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.840 [2024-11-20 17:18:12.832254] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.840 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.134 [2024-11-20 17:18:12.920833] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:55.134 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:55.476 [2024-11-20 17:18:13.232337] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:55.476 [2024-11-20 17:18:13.232375] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:55.476 [2024-11-20 17:18:13.232384] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:55.476 [2024-11-20 17:18:13.232388] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:56.045 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.045 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:56.045 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:56.045 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:56.045 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:56.045 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.045 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:56.045 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.045 17:18:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.045 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.306 [2024-11-20 17:18:14.087520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.306 [2024-11-20 17:18:14.087548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.306 [2024-11-20 17:18:14.087558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.306 [2024-11-20 17:18:14.087565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.306 [2024-11-20 17:18:14.087573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.306 [2024-11-20 17:18:14.087580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.306 [2024-11-20 17:18:14.087587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.306 [2024-11-20 17:18:14.087593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.306 [2024-11-20 17:18:14.087600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8390 is same with the state(6) to be set 00:23:56.306 [2024-11-20 17:18:14.087651] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:56.306 [2024-11-20 17:18:14.087666] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:56.306 17:18:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:56.306 [2024-11-20 17:18:14.097527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8390 (9): Bad file descriptor 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:56.306 [2024-11-20 17:18:14.107561] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.306 [2024-11-20 17:18:14.107576] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:56.306 [2024-11-20 17:18:14.107581] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.306 [2024-11-20 17:18:14.107586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.306 [2024-11-20 17:18:14.107604] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
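The `get_subsystem_names`, `get_bdev_list`, and `get_subsystem_paths` helpers exercised throughout this xtrace (`host/discovery.sh` @59, @55, @63) share one pipeline: an RPC against the host's `/tmp/host.sock`, a `jq` projection, a sort, and `xargs` to flatten the result onto one line for the `[[ ... == ... ]]` comparison. Approximate reconstructions assembled from the trace lines above (the real definitions live in the test script; `rpc_cmd` is provided by the autotest environment):

```shell
# Reconstructed from the @59/@55/@63 xtrace lines; rpc_cmd is the autotest
# wrapper around scripts/rpc.py and is assumed to exist in the environment.
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # trsvcid is the NVMe-oF service id (the TCP port), hence the numeric sort
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
```

The `xargs` step is why `get_subsystem_paths nvme0` compares cleanly against `"$NVMF_PORT $NVMF_SECOND_PORT"` ("4420 4421") once the second path is attached.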
00:23:56.306 [2024-11-20 17:18:14.107884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.306 [2024-11-20 17:18:14.107899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c8390 with addr=10.0.0.2, port=4420 00:23:56.306 [2024-11-20 17:18:14.107907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8390 is same with the state(6) to be set 00:23:56.306 [2024-11-20 17:18:14.107919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8390 (9): Bad file descriptor 00:23:56.306 [2024-11-20 17:18:14.107936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.306 [2024-11-20 17:18:14.107944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.306 [2024-11-20 17:18:14.107952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.306 [2024-11-20 17:18:14.107958] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.306 [2024-11-20 17:18:14.107963] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.306 [2024-11-20 17:18:14.107967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:56.306 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.306 [2024-11-20 17:18:14.117636] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.306 [2024-11-20 17:18:14.117646] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:56.306 [2024-11-20 17:18:14.117650] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.306 [2024-11-20 17:18:14.117654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.306 [2024-11-20 17:18:14.117667] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:56.306 [2024-11-20 17:18:14.117946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.306 [2024-11-20 17:18:14.117961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c8390 with addr=10.0.0.2, port=4420 00:23:56.306 [2024-11-20 17:18:14.117969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8390 is same with the state(6) to be set 00:23:56.306 [2024-11-20 17:18:14.117981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8390 (9): Bad file descriptor 00:23:56.306 [2024-11-20 17:18:14.118000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.306 [2024-11-20 17:18:14.118007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.306 [2024-11-20 17:18:14.118014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.306 [2024-11-20 17:18:14.118020] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.306 [2024-11-20 17:18:14.118024] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.306 [2024-11-20 17:18:14.118028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:56.306 [2024-11-20 17:18:14.127699] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.306 [2024-11-20 17:18:14.127710] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:56.306 [2024-11-20 17:18:14.127715] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.306 [2024-11-20 17:18:14.127718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.306 [2024-11-20 17:18:14.127731] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:56.306 [2024-11-20 17:18:14.127959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.306 [2024-11-20 17:18:14.127972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c8390 with addr=10.0.0.2, port=4420 00:23:56.306 [2024-11-20 17:18:14.127979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8390 is same with the state(6) to be set 00:23:56.306 [2024-11-20 17:18:14.127990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8390 (9): Bad file descriptor 00:23:56.306 [2024-11-20 17:18:14.127999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.306 [2024-11-20 17:18:14.128005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.306 [2024-11-20 17:18:14.128013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.306 [2024-11-20 17:18:14.128018] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:56.306 [2024-11-20 17:18:14.128023] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.307 [2024-11-20 17:18:14.128026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:56.307 [2024-11-20 17:18:14.137763] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.307 [2024-11-20 17:18:14.137778] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:56.307 [2024-11-20 17:18:14.137782] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.307 [2024-11-20 17:18:14.137785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.307 [2024-11-20 17:18:14.137799] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:56.307 [2024-11-20 17:18:14.138041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.307 [2024-11-20 17:18:14.138055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c8390 with addr=10.0.0.2, port=4420 00:23:56.307 [2024-11-20 17:18:14.138065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8390 is same with the state(6) to be set 00:23:56.307 [2024-11-20 17:18:14.138080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8390 (9): Bad file descriptor 00:23:56.307 [2024-11-20 17:18:14.138097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.307 [2024-11-20 17:18:14.138105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.307 [2024-11-20 17:18:14.138113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.307 [2024-11-20 17:18:14.138119] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.307 [2024-11-20 17:18:14.138123] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.307 [2024-11-20 17:18:14.138127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:56.307 [2024-11-20 17:18:14.147831] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:56.307 [2024-11-20 17:18:14.147845] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:56.307 [2024-11-20 17:18:14.147850] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:56.307 [2024-11-20 17:18:14.147855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:56.307 [2024-11-20 17:18:14.147870] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.307 [2024-11-20 17:18:14.148069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.307 [2024-11-20 17:18:14.148088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c8390 with addr=10.0.0.2, port=4420
00:23:56.307 [2024-11-20 17:18:14.148097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8390 is same with the state(6) to be set
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:56.307 [2024-11-20 17:18:14.148107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8390 (9): Bad file descriptor
00:23:56.307 [2024-11-20 17:18:14.148121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:56.307 [2024-11-20 17:18:14.148127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:56.307 [2024-11-20 17:18:14.148134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:56.307 [2024-11-20 17:18:14.148146] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:56.307 [2024-11-20 17:18:14.148156] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:56.307 [2024-11-20 17:18:14.148161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:56.307 [2024-11-20 17:18:14.157902] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:56.307 [2024-11-20 17:18:14.157918] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:56.307 [2024-11-20 17:18:14.157922] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:56.307 [2024-11-20 17:18:14.157926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:56.307 [2024-11-20 17:18:14.157939] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:56.307 [2024-11-20 17:18:14.158190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.307 [2024-11-20 17:18:14.158206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c8390 with addr=10.0.0.2, port=4420
00:23:56.307 [2024-11-20 17:18:14.158214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8390 is same with the state(6) to be set
00:23:56.307 [2024-11-20 17:18:14.158224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8390 (9): Bad file descriptor
00:23:56.307 [2024-11-20 17:18:14.158233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:56.307 [2024-11-20 17:18:14.158239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:56.307 [2024-11-20 17:18:14.158246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:56.307 [2024-11-20 17:18:14.158252] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:56.307 [2024-11-20 17:18:14.158256] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:56.307 [2024-11-20 17:18:14.158260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:56.307 [2024-11-20 17:18:14.167969] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:56.307 [2024-11-20 17:18:14.167979] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:56.307 [2024-11-20 17:18:14.167983] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:56.307 [2024-11-20 17:18:14.167987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:56.307 [2024-11-20 17:18:14.167998] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:56.307 [2024-11-20 17:18:14.168165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.307 [2024-11-20 17:18:14.168176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c8390 with addr=10.0.0.2, port=4420
00:23:56.307 [2024-11-20 17:18:14.168182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8390 is same with the state(6) to be set
00:23:56.307 [2024-11-20 17:18:14.168192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c8390 (9): Bad file descriptor
00:23:56.307 [2024-11-20 17:18:14.168206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:56.307 [2024-11-20 17:18:14.168219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:56.307 [2024-11-20 17:18:14.168226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:56.307 [2024-11-20 17:18:14.168231] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:56.307 [2024-11-20 17:18:14.168236] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:56.307 [2024-11-20 17:18:14.168239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:56.307 [2024-11-20 17:18:14.173434] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:23:56.307 [2024-11-20 17:18:14.173450] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:56.307 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:56.308 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:23:56.566 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:56.567 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.501 [2024-11-20 17:18:15.459956] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:57.501 [2024-11-20 17:18:15.459981] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:57.501 [2024-11-20 17:18:15.459993] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:57.759 [2024-11-20 17:18:15.547277] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:23:58.017
[2024-11-20 17:18:15.858740] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:23:58.017 [2024-11-20 17:18:15.859391] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x9c5a10:1 started.
00:23:58.017 [2024-11-20 17:18:15.861021] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:58.017 [2024-11-20 17:18:15.861052] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.017 [2024-11-20 17:18:15.869652] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x9c5a10 was disconnected and freed. delete nvme_qpair.
00:23:58.017 request:
00:23:58.017 {
00:23:58.017 "name": "nvme",
00:23:58.017 "trtype": "tcp",
00:23:58.017 "traddr": "10.0.0.2",
00:23:58.017 "adrfam": "ipv4",
00:23:58.017 "trsvcid": "8009",
00:23:58.017 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:58.017 "wait_for_attach": true,
00:23:58.017 "method": "bdev_nvme_start_discovery",
00:23:58.017 "req_id": 1
00:23:58.017 }
00:23:58.017 Got JSON-RPC error response
00:23:58.017 response:
00:23:58.017 {
00:23:58.017 "code": -17,
00:23:58.017 "message": "File exists"
00:23:58.017 }
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:23:58.017 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.018 request:
00:23:58.018 {
00:23:58.018 "name": "nvme_second",
00:23:58.018 "trtype": "tcp",
00:23:58.018 "traddr": "10.0.0.2",
00:23:58.018 "adrfam": "ipv4",
00:23:58.018 "trsvcid": "8009",
00:23:58.018 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:58.018 "wait_for_attach": true,
00:23:58.018 "method": "bdev_nvme_start_discovery",
00:23:58.018 "req_id": 1
00:23:58.018 }
00:23:58.018 Got JSON-RPC error response
00:23:58.018 response:
00:23:58.018 {
00:23:58.018 "code": -17,
00:23:58.018 "message": "File exists"
00:23:58.018 }
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.018 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:58.018 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:58.276 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:59.210 [2024-11-20 17:18:17.100427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.210 [2024-11-20 17:18:17.100474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9deb90 with addr=10.0.0.2, port=8010
00:23:59.210 [2024-11-20 17:18:17.100508] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:23:59.210 [2024-11-20 17:18:17.100515] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:23:59.210 [2024-11-20 17:18:17.100522] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:24:00.145 [2024-11-20 17:18:18.102939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.145 [2024-11-20 17:18:18.102969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c8020 with addr=10.0.0.2, port=8010
00:24:00.145 [2024-11-20 17:18:18.102986] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:00.145 [2024-11-20 17:18:18.102993] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:00.145 [2024-11-20 17:18:18.103000] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:24:01.080 [2024-11-20 17:18:19.105100] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:24:01.080 request:
00:24:01.080 {
00:24:01.080 "name": "nvme_second",
00:24:01.080 "trtype": "tcp",
00:24:01.080 "traddr": "10.0.0.2",
00:24:01.080 "adrfam": "ipv4",
00:24:01.080 "trsvcid": "8010",
00:24:01.080 "hostnqn": "nqn.2021-12.io.spdk:test",
00:24:01.080 "wait_for_attach": false,
00:24:01.080 "attach_timeout_ms": 3000,
00:24:01.080 "method": "bdev_nvme_start_discovery",
00:24:01.080 "req_id": 1
00:24:01.080 }
00:24:01.080 Got JSON-RPC error response
00:24:01.080 response:
00:24:01.080 {
00:24:01.080 "code": -110,
00:24:01.080 "message": "Connection timed out"
00:24:01.080 }
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:01.080 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2600045
00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:24:01.352 17:18:19
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.352 rmmod nvme_tcp 00:24:01.352 rmmod nvme_fabrics 00:24:01.352 rmmod nvme_keyring 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2599856 ']' 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2599856 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2599856 ']' 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2599856 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2599856 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2599856' 
00:24:01.352 killing process with pid 2599856 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2599856 00:24:01.352 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2599856 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.615 17:18:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.519 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.519 00:24:03.519 real 0m17.877s 00:24:03.519 user 0m21.405s 00:24:03.519 sys 0m5.833s 00:24:03.519 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.519 17:18:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.519 ************************************ 00:24:03.519 END TEST nvmf_host_discovery 00:24:03.519 ************************************ 00:24:03.519 17:18:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:03.519 17:18:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:03.519 17:18:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.519 17:18:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.779 ************************************ 00:24:03.779 START TEST nvmf_host_multipath_status 00:24:03.779 ************************************ 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:03.779 * Looking for test storage... 
00:24:03.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.779 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:03.780 17:18:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.780 17:18:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:03.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.780 --rc genhtml_branch_coverage=1 00:24:03.780 --rc genhtml_function_coverage=1 00:24:03.780 --rc genhtml_legend=1 00:24:03.780 --rc geninfo_all_blocks=1 00:24:03.780 --rc geninfo_unexecuted_blocks=1 00:24:03.780 00:24:03.780 ' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:03.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.780 --rc genhtml_branch_coverage=1 00:24:03.780 --rc genhtml_function_coverage=1 00:24:03.780 --rc genhtml_legend=1 00:24:03.780 --rc geninfo_all_blocks=1 00:24:03.780 --rc geninfo_unexecuted_blocks=1 00:24:03.780 00:24:03.780 ' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:03.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.780 --rc genhtml_branch_coverage=1 00:24:03.780 --rc genhtml_function_coverage=1 00:24:03.780 --rc genhtml_legend=1 00:24:03.780 --rc geninfo_all_blocks=1 00:24:03.780 --rc geninfo_unexecuted_blocks=1 00:24:03.780 00:24:03.780 ' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:03.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.780 --rc genhtml_branch_coverage=1 00:24:03.780 --rc genhtml_function_coverage=1 00:24:03.780 --rc genhtml_legend=1 00:24:03.780 --rc geninfo_all_blocks=1 00:24:03.780 --rc geninfo_unexecuted_blocks=1 00:24:03.780 00:24:03.780 ' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:03.780 
17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.780 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.781 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.781 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.781 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.781 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.781 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.781 17:18:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.781 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.781 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:10.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:10.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:10.348 Found net devices under 0000:86:00.0: cvl_0_0 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.348 17:18:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:10.348 Found net devices under 0000:86:00.1: cvl_0_1 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.348 17:18:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.348 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:24:10.349 00:24:10.349 --- 10.0.0.2 ping statistics --- 00:24:10.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.349 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:24:10.349 00:24:10.349 --- 10.0.0.1 ping statistics --- 00:24:10.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.349 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2605006 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2605006 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2605006 ']' 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.349 17:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:10.349 [2024-11-20 17:18:27.804625] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:24:10.349 [2024-11-20 17:18:27.804676] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.349 [2024-11-20 17:18:27.885886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:10.349 [2024-11-20 17:18:27.925550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.349 [2024-11-20 17:18:27.925586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:10.349 [2024-11-20 17:18:27.925593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.349 [2024-11-20 17:18:27.925599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.349 [2024-11-20 17:18:27.925604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.349 [2024-11-20 17:18:27.926818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.349 [2024-11-20 17:18:27.926819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2605006 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.349 [2024-11-20 17:18:28.232528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.349 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:10.608 Malloc0 00:24:10.608 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:10.866 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.866 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.125 [2024-11-20 17:18:29.026972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.125 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:11.383 [2024-11-20 17:18:29.223478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2605303 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2605303 /var/tmp/bdevperf.sock 00:24:11.383 17:18:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2605303 ']' 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.383 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:11.641 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.641 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:11.641 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:11.899 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:12.157 Nvme0n1 00:24:12.157 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:12.723 Nvme0n1 00:24:12.723 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:12.723 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:14.623 17:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:14.623 17:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:14.881 17:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.881 17:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:16.255 17:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:16.255 17:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.255 17:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.255 17:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.255 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.255 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:16.255 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.255 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.514 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.514 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.514 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.514 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.772 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.772 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.772 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.772 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.772 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.772 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.772 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.772 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.028 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.028 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.028 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.028 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.285 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.285 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:17.285 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.574 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.945 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.203 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.203 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.203 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.203 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.461 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.461 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.461 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.461 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.719 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.719 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.719 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.719 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.976 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.976 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:19.976 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:19.976 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:20.234 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.608 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.867 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.867 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.867 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.867 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.124 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.124 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.124 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.124 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.382 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.382 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.382 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.382 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.639 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.639 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:22.639 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:22.639 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:22.897 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:24.269 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:24.269 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:24.269 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.269 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.269 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.269 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:24.269 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.269 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.527 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.527 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.527 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.527 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.527 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.527 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.527 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.527 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.785 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.785 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:24.785 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.785 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.042 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.042 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:25.042 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.042 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:25.300 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.300 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:25.300 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:25.557 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:25.557 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:26.555 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:26.555 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:26.556 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.556 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.813 17:18:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.813 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:26.813 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.813 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.070 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.070 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.070 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.070 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:27.328 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.328 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:27.328 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.328 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.328 
17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.328 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:27.585 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.585 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:27.585 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.585 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:27.585 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.585 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.843 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.843 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:27.843 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:28.100 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:28.100 17:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:29.472 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:29.472 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:29.472 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.472 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.472 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.472 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:29.472 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.472 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:29.731 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.731 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:29.731 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.731 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.731 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.731 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.731 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:29.731 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.989 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.989 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:29.989 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.989 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.248 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.248 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:30.248 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.248 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.506 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.506 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:30.764 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:30.764 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:30.764 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:31.022 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.397 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.655 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.655 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.655 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:32.655 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.912 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.912 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.912 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.912 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.170 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.170 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.170 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.170 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.428 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.428 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:33.428 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:33.686 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:33.687 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:35.060 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:35.060 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:35.060 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.060 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:35.060 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:35.060 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:35.060 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.060 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:35.318 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.318 17:18:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:35.318 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.318 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.318 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.318 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.318 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.318 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.576 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.576 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.576 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.576 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.834 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.834 
17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:35.834 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.834 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:36.092 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.092 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:36.092 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:36.350 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:36.350 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:37.723 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:37.723 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:37.723 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.723 17:18:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.723 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.723 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:37.723 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.723 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.980 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.980 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.981 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.981 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.981 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.981 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.981 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.981 17:18:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.238 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.238 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.238 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.238 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.496 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.496 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:38.496 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.496 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.754 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.754 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:38.754 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:39.012 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:39.270 17:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:40.203 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:40.203 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:40.203 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.203 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.460 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.460 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:40.460 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.460 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.718 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.719 17:18:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.719 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.719 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:40.719 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.719 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:40.719 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.719 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:40.982 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.982 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:40.982 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.982 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:41.243 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.243 
17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:41.243 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.243 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2605303 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2605303 ']' 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2605303 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2605303 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2605303' 00:24:41.501 killing process with pid 2605303 00:24:41.501 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2605303 00:24:41.501 
17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2605303 00:24:41.501 { 00:24:41.501 "results": [ 00:24:41.501 { 00:24:41.501 "job": "Nvme0n1", 00:24:41.501 "core_mask": "0x4", 00:24:41.501 "workload": "verify", 00:24:41.501 "status": "terminated", 00:24:41.501 "verify_range": { 00:24:41.501 "start": 0, 00:24:41.501 "length": 16384 00:24:41.501 }, 00:24:41.501 "queue_depth": 128, 00:24:41.501 "io_size": 4096, 00:24:41.501 "runtime": 28.784089, 00:24:41.501 "iops": 10842.378926774441, 00:24:41.501 "mibps": 42.35304268271266, 00:24:41.501 "io_failed": 0, 00:24:41.501 "io_timeout": 0, 00:24:41.501 "avg_latency_us": 11786.376307341887, 00:24:41.501 "min_latency_us": 998.6438095238095, 00:24:41.501 "max_latency_us": 3019898.88 00:24:41.501 } 00:24:41.501 ], 00:24:41.501 "core_count": 1 00:24:41.501 } 00:24:41.763 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2605303 00:24:41.763 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:41.763 [2024-11-20 17:18:29.300415] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:24:41.763 [2024-11-20 17:18:29.300470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605303 ] 00:24:41.763 [2024-11-20 17:18:29.374284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.763 [2024-11-20 17:18:29.413814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.763 Running I/O for 90 seconds... 
00:24:41.763 11629.00 IOPS, 45.43 MiB/s [2024-11-20T16:18:59.806Z] 11631.00 IOPS, 45.43 MiB/s [2024-11-20T16:18:59.806Z] 11647.00 IOPS, 45.50 MiB/s [2024-11-20T16:18:59.806Z] 11686.50 IOPS, 45.65 MiB/s [2024-11-20T16:18:59.806Z] 11695.80 IOPS, 45.69 MiB/s [2024-11-20T16:18:59.806Z] 11700.50 IOPS, 45.71 MiB/s [2024-11-20T16:18:59.806Z] 11691.57 IOPS, 45.67 MiB/s [2024-11-20T16:18:59.806Z] 11676.88 IOPS, 45.61 MiB/s [2024-11-20T16:18:59.806Z] 11671.22 IOPS, 45.59 MiB/s [2024-11-20T16:18:59.806Z] 11663.00 IOPS, 45.56 MiB/s [2024-11-20T16:18:59.806Z] 11674.36 IOPS, 45.60 MiB/s [2024-11-20T16:18:59.806Z] 11649.75 IOPS, 45.51 MiB/s [2024-11-20T16:18:59.806Z] [2024-11-20 17:18:43.325225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.763 [2024-11-20 17:18:43.325552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.763 [2024-11-20 17:18:43.325559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.325790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 [2024-11-20 17:18:43.325809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 [2024-11-20 17:18:43.325828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:24:41.764 [2024-11-20 17:18:43.325841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 [2024-11-20 17:18:43.325847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 [2024-11-20 17:18:43.325867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 [2024-11-20 17:18:43.325885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 [2024-11-20 17:18:43.325904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 [2024-11-20 17:18:43.325922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 
[2024-11-20 17:18:43.325940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.325953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.764 [2024-11-20 17:18:43.325960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 
17:18:43.326766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.326991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.326997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.327012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.327018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.327070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.327079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.327094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.327100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.327115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.327122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.764 [2024-11-20 17:18:43.327136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.764 [2024-11-20 17:18:43.327143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.765 [2024-11-20 17:18:43.327164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.765 [2024-11-20 17:18:43.327185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.765 [2024-11-20 17:18:43.327211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.327542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.327996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.765 [2024-11-20 17:18:43.328004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.765 [2024-11-20 17:18:43.328315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.765 [2024-11-20 17:18:43.328321] nvme_qpair.c: 
[... repeated nvme_qpair.c NOTICE records elided: READ commands (sqid:1, nsid:1, LBAs 3960-4224) and WRITE commands (LBAs 4624-4672), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.766 [2024-11-20 17:18:43.329277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.766 [2024-11-20 17:18:43.329294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.766 [2024-11-20 17:18:43.329303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.766 [2024-11-20 17:18:43.329321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.766 [2024-11-20 17:18:43.329330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.766 11377.92 IOPS, 44.45 MiB/s [2024-11-20T16:18:59.809Z] 10565.21 IOPS, 41.27 MiB/s [2024-11-20T16:18:59.809Z] 9860.87 IOPS, 38.52 MiB/s [2024-11-20T16:18:59.809Z] 9473.12 IOPS, 37.00 MiB/s [2024-11-20T16:18:59.809Z] 9611.76 IOPS, 37.55 MiB/s [2024-11-20T16:18:59.809Z] 9725.00 IOPS, 37.99 MiB/s [2024-11-20T16:18:59.809Z] 9933.32 IOPS, 38.80 MiB/s [2024-11-20T16:18:59.809Z] 10112.65 IOPS, 39.50 MiB/s [2024-11-20T16:18:59.809Z] 10268.38 IOPS, 40.11 MiB/s [2024-11-20T16:18:59.809Z] 10342.95 IOPS, 40.40 MiB/s [2024-11-20T16:18:59.809Z] 10398.83 IOPS, 40.62 MiB/s [2024-11-20T16:18:59.809Z] 10468.92 IOPS, 40.89 MiB/s [2024-11-20T16:18:59.809Z] 10609.88 IOPS, 41.44 MiB/s [2024-11-20T16:18:59.809Z] 10720.73 IOPS, 41.88 MiB/s [2024-11-20T16:18:59.809Z] [2024-11-20 17:18:57.060536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.766 [2024-11-20 17:18:57.060575] 
[... repeated nvme_qpair.c NOTICE records elided: WRITE commands (sqid:1, nsid:1, LBAs 46296-46952) and READ commands (LBAs 45992-46264), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.768 [2024-11-20 17:18:57.063335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.768 [2024-11-20 17:18:57.063347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.768 [2024-11-20 17:18:57.063355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.768 [2024-11-20 17:18:57.063367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.768 [2024-11-20 17:18:57.063374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.768 [2024-11-20 17:18:57.063386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.768 [2024-11-20 17:18:57.063392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.768 [2024-11-20 17:18:57.063404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.768 [2024-11-20 17:18:57.063410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.768 10797.19 IOPS, 42.18 MiB/s [2024-11-20T16:18:59.811Z] 10829.54 IOPS, 42.30 MiB/s [2024-11-20T16:18:59.811Z] Received shutdown signal, test time was about 28.784717 seconds 00:24:41.768 00:24:41.768 Latency(us) 00:24:41.768 
[2024-11-20T16:18:59.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.768 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:41.768 Verification LBA range: start 0x0 length 0x4000 00:24:41.768 Nvme0n1 : 28.78 10842.38 42.35 0.00 0.00 11786.38 998.64 3019898.88 00:24:41.768 [2024-11-20T16:18:59.811Z] =================================================================================================================== 00:24:41.768 [2024-11-20T16:18:59.811Z] Total : 10842.38 42.35 0.00 0.00 11786.38 998.64 3019898.88 00:24:41.768 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.026 rmmod nvme_tcp 00:24:42.026 rmmod nvme_fabrics 00:24:42.026 rmmod nvme_keyring 00:24:42.026 
17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2605006 ']' 00:24:42.026 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2605006 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2605006 ']' 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2605006 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2605006 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2605006' 00:24:42.027 killing process with pid 2605006 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2605006 00:24:42.027 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2605006 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:42.285 17:19:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.285 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.190 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:44.190 00:24:44.190 real 0m40.592s 00:24:44.190 user 1m49.925s 00:24:44.190 sys 0m11.678s 00:24:44.190 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.190 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:44.190 ************************************ 00:24:44.190 END TEST nvmf_host_multipath_status 00:24:44.190 ************************************ 00:24:44.190 17:19:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:44.190 17:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.190 17:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.190 17:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.450 ************************************ 00:24:44.450 START TEST nvmf_discovery_remove_ifc 00:24:44.450 ************************************ 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:44.450 * Looking for test storage... 00:24:44.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.450 17:19:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.450 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:44.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.451 --rc genhtml_branch_coverage=1 00:24:44.451 --rc genhtml_function_coverage=1 00:24:44.451 --rc genhtml_legend=1 00:24:44.451 --rc geninfo_all_blocks=1 
00:24:44.451 --rc geninfo_unexecuted_blocks=1 00:24:44.451 00:24:44.451 ' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:44.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.451 --rc genhtml_branch_coverage=1 00:24:44.451 --rc genhtml_function_coverage=1 00:24:44.451 --rc genhtml_legend=1 00:24:44.451 --rc geninfo_all_blocks=1 00:24:44.451 --rc geninfo_unexecuted_blocks=1 00:24:44.451 00:24:44.451 ' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:44.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.451 --rc genhtml_branch_coverage=1 00:24:44.451 --rc genhtml_function_coverage=1 00:24:44.451 --rc genhtml_legend=1 00:24:44.451 --rc geninfo_all_blocks=1 00:24:44.451 --rc geninfo_unexecuted_blocks=1 00:24:44.451 00:24:44.451 ' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:44.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.451 --rc genhtml_branch_coverage=1 00:24:44.451 --rc genhtml_function_coverage=1 00:24:44.451 --rc genhtml_legend=1 00:24:44.451 --rc geninfo_all_blocks=1 00:24:44.451 --rc geninfo_unexecuted_blocks=1 00:24:44.451 00:24:44.451 ' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.451 
17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.451 
17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.451 17:19:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.451 17:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.018 17:19:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.018 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.018 17:19:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:51.019 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.019 17:19:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:51.019 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:51.019 Found net devices under 0000:86:00.0: cvl_0_0 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:51.019 Found net devices under 0000:86:00.1: cvl_0_1 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:24:51.019 00:24:51.019 --- 10.0.0.2 ping statistics --- 00:24:51.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.019 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:24:51.019 00:24:51.019 --- 10.0.0.1 ping statistics --- 00:24:51.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.019 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2613986 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
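The trace above (nvmf/common.sh@250-291) isolates the target NIC in a network namespace so the initiator and target can talk over real TCP on one machine. The sequence can be sketched as a dry run; the interface names, namespace name, IPs, and port mirror the log, while the `run` wrapper is illustrative (it only prints each command, so the sketch can be inspected without root or the physical cvl NICs):

```shell
# Dry-run sketch of the nvmf_tcp_init sequence from the trace above.
# "run" is a stand-in that prints instead of executing.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace; serves 10.0.0.2:4420
INITIATOR_IF=cvl_0_1     # stays in the root namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # root ns -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # namespaced target -> root ns
```

The two pings at the end correspond to the successful 0%-loss ping statistics in the log and confirm the cross-namespace path before the target is launched inside the namespace with `ip netns exec`.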
nvmf/common.sh@510 -- # waitforlisten 2613986 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2613986 ']' 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.019 [2024-11-20 17:19:08.454542] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:24:51.019 [2024-11-20 17:19:08.454586] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.019 [2024-11-20 17:19:08.532086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.019 [2024-11-20 17:19:08.572336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.019 [2024-11-20 17:19:08.572371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:51.019 [2024-11-20 17:19:08.572378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.019 [2024-11-20 17:19:08.572384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.019 [2024-11-20 17:19:08.572389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.019 [2024-11-20 17:19:08.572945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.019 [2024-11-20 17:19:08.724894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.019 [2024-11-20 17:19:08.733084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:51.019 null0 00:24:51.019 [2024-11-20 17:19:08.765064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2614009 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2614009 /tmp/host.sock 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2614009 ']' 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:51.019 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.019 [2024-11-20 17:19:08.831603] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:24:51.019 [2024-11-20 17:19:08.831644] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2614009 ] 00:24:51.019 [2024-11-20 17:19:08.905200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.019 [2024-11-20 17:19:08.947237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.019 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.277 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.277 17:19:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:51.277 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.277 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.211 [2024-11-20 17:19:10.132687] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:52.211 [2024-11-20 17:19:10.132709] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:52.211 [2024-11-20 17:19:10.132725] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:52.469 [2024-11-20 17:19:10.259114] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:52.469 [2024-11-20 17:19:10.320783] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:52.469 [2024-11-20 17:19:10.321566] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x13eea10:1 started. 
00:24:52.469 [2024-11-20 17:19:10.322880] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:52.469 [2024-11-20 17:19:10.322921] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:52.469 [2024-11-20 17:19:10.322940] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:52.469 [2024-11-20 17:19:10.322954] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:52.469 [2024-11-20 17:19:10.322973] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.470 [2024-11-20 17:19:10.330359] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x13eea10 was disconnected and freed. delete nvme_qpair. 
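The host application in this trace is started with `--wait-for-rpc`, so nothing happens until the test drives it over `/tmp/host.sock`. The RPC sequence visible above (discovery_remove_ifc.sh@65-69) can be sketched as a dry run; the `rpc` wrapper is illustrative and only prints the call, where the real test's `rpc_cmd` invokes SPDK's rpc.py against the socket:

```shell
# Dry-run sketch of the host-side RPC sequence from the trace above.
# "rpc" is a stand-in that prints instead of calling SPDK's rpc.py.
rpc() { echo "rpc.py -s /tmp/host.sock $*"; }

rpc bdev_nvme_set_options -e 1     # enable error injection/attach behavior per the trace
rpc framework_start_init           # leave --wait-for-rpc and start the framework
rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
```

With `--wait-for-attach`, the discovery RPC returns only after the discovered subsystem (nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 in the log) is attached, which is why the next step can immediately check the bdev list for nvme0n1. The short `--ctrlr-loss-timeout-sec 2` and `--reconnect-delay-sec 1` values set up the rapid reconnect attempts seen later, after the interface is taken down.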
00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.470 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.728 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:52.728 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:53.659 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.593 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.968 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.902 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
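The repeating `get_bdev_list` / `sleep 1` cycle above is the test's polling loop: list the bdevs over the host RPC socket, reduce the JSON to a sorted, space-joined list of names, and compare against the expected value (`nvme0n1` while connected, empty after the interface is removed). The trace pipes `rpc_cmd bdev_get_bdevs` through `jq -r '.[].name' | sort | xargs`; the sketch below substitutes a canned one-bdev reply and a sed expression for jq so the text processing can run anywhere (both the sample JSON and the sed stand-in are illustrative, not part of the original script):

```shell
# Sketch of the get_bdev_list helper pattern from the trace above.
# A canned RPC reply stands in for `rpc_cmd -s /tmp/host.sock bdev_get_bdevs`.
sample_reply='[{"name": "nvme0n1", "block_size": 512}]'

get_bdev_list() {
    # sed stand-in for the trace's `jq -r '.[].name'`
    printf '%s\n' "$sample_reply" |
        sed -n 's/.*"name": *"\([^"]*\)".*/\1/p' | sort | xargs
}

get_bdev_list   # prints: nvme0n1
```

In the test itself, `wait_for_bdev` loops on this helper once per second: `[[ nvme0n1 != '' ]]` holds while the stale bdev lingers after `ip link set cvl_0_0 down`, and the loop exits once the controller-loss timeout fires and the list goes empty.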
00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.838 [2024-11-20 17:19:15.764639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:57.838 [2024-11-20 17:19:15.764675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.838 [2024-11-20 17:19:15.764685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.838 [2024-11-20 17:19:15.764710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.838 [2024-11-20 17:19:15.764717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.838 [2024-11-20 17:19:15.764725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.838 [2024-11-20 17:19:15.764732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.838 [2024-11-20 17:19:15.764739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.838 [2024-11-20 17:19:15.764745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.838 [2024-11-20 17:19:15.764752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.838 [2024-11-20 17:19:15.764759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.838 [2024-11-20 17:19:15.764765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cb220 is same with the state(6) to be set 00:24:57.838 [2024-11-20 17:19:15.774660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cb220 (9): Bad file descriptor 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:57.838 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.838 [2024-11-20 17:19:15.784697] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:57.838 [2024-11-20 17:19:15.784709] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:57.838 [2024-11-20 17:19:15.784713] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:57.838 [2024-11-20 17:19:15.784718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.838 [2024-11-20 17:19:15.784739] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:58.774 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.774 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.774 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.774 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.774 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.774 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.774 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.774 [2024-11-20 17:19:16.807270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:58.774 [2024-11-20 17:19:16.807354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cb220 with addr=10.0.0.2, port=4420 00:24:58.774 [2024-11-20 17:19:16.807387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cb220 is same with the state(6) to be set 00:24:58.774 [2024-11-20 17:19:16.807441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cb220 (9): Bad file descriptor 00:24:58.774 [2024-11-20 17:19:16.808395] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:58.774 [2024-11-20 17:19:16.808458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.774 [2024-11-20 17:19:16.808481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.774 [2024-11-20 17:19:16.808504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.774 [2024-11-20 17:19:16.808524] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:58.774 [2024-11-20 17:19:16.808540] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.774 [2024-11-20 17:19:16.808553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:58.774 [2024-11-20 17:19:16.808575] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:58.774 [2024-11-20 17:19:16.808589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:59.033 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.033 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:59.033 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.968 [2024-11-20 17:19:17.811108] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:59.968 [2024-11-20 17:19:17.811129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:59.968 [2024-11-20 17:19:17.811140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:59.968 [2024-11-20 17:19:17.811147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:59.968 [2024-11-20 17:19:17.811154] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:59.968 [2024-11-20 17:19:17.811161] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:59.968 [2024-11-20 17:19:17.811165] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:59.968 [2024-11-20 17:19:17.811169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:59.968 [2024-11-20 17:19:17.811191] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:59.968 [2024-11-20 17:19:17.811213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.968 [2024-11-20 17:19:17.811223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.968 [2024-11-20 17:19:17.811232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.968 [2024-11-20 17:19:17.811242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.968 [2024-11-20 17:19:17.811249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:59.968 [2024-11-20 17:19:17.811255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.968 [2024-11-20 17:19:17.811262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.968 [2024-11-20 17:19:17.811268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.968 [2024-11-20 17:19:17.811275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.968 [2024-11-20 17:19:17.811282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.968 [2024-11-20 17:19:17.811289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:59.968 [2024-11-20 17:19:17.811705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ba900 (9): Bad file descriptor 00:24:59.968 [2024-11-20 17:19:17.812716] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:59.968 [2024-11-20 17:19:17.812726] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.968 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:00.226 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:00.226 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:01.161 17:19:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.096 [2024-11-20 17:19:19.866667] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:02.096 [2024-11-20 17:19:19.866684] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:02.096 [2024-11-20 17:19:19.866696] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:02.096 [2024-11-20 17:19:19.993097] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:02.096 17:19:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.354 [2024-11-20 17:19:20.168077] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:02.354 [2024-11-20 17:19:20.168753] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x13bf830:1 started. 
00:25:02.354 [2024-11-20 17:19:20.169781] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:02.354 [2024-11-20 17:19:20.169812] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:02.354 [2024-11-20 17:19:20.169829] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:02.354 [2024-11-20 17:19:20.169843] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:02.354 [2024-11-20 17:19:20.169850] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:02.354 [2024-11-20 17:19:20.175586] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x13bf830 was disconnected and freed. delete nvme_qpair. 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:03.339 17:19:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2614009 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2614009 ']' 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2614009 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:03.339 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.340 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2614009 00:25:03.340 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.340 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.340 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2614009' 00:25:03.340 killing process with pid 2614009 00:25:03.340 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2614009 00:25:03.340 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2614009 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.624 
17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.624 rmmod nvme_tcp 00:25:03.624 rmmod nvme_fabrics 00:25:03.624 rmmod nvme_keyring 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2613986 ']' 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2613986 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2613986 ']' 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2613986 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2613986 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2613986' 00:25:03.624 
killing process with pid 2613986 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2613986 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2613986 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.624 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.883 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.883 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.883 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.883 17:19:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.882 00:25:05.882 real 0m21.484s 00:25:05.882 user 0m26.678s 00:25:05.882 sys 0m5.939s 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.882 ************************************ 00:25:05.882 END TEST nvmf_discovery_remove_ifc 00:25:05.882 ************************************ 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.882 ************************************ 00:25:05.882 START TEST nvmf_identify_kernel_target 00:25:05.882 ************************************ 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:05.882 * Looking for test storage... 
00:25:05.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:05.882 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.141 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:06.142 17:19:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.142 17:19:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:06.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.142 --rc genhtml_branch_coverage=1 00:25:06.142 --rc genhtml_function_coverage=1 00:25:06.142 --rc genhtml_legend=1 00:25:06.142 --rc geninfo_all_blocks=1 00:25:06.142 --rc geninfo_unexecuted_blocks=1 00:25:06.142 00:25:06.142 ' 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:06.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.142 --rc genhtml_branch_coverage=1 00:25:06.142 --rc genhtml_function_coverage=1 00:25:06.142 --rc genhtml_legend=1 00:25:06.142 --rc geninfo_all_blocks=1 00:25:06.142 --rc geninfo_unexecuted_blocks=1 00:25:06.142 00:25:06.142 ' 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:06.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.142 --rc genhtml_branch_coverage=1 00:25:06.142 --rc genhtml_function_coverage=1 00:25:06.142 --rc genhtml_legend=1 00:25:06.142 --rc geninfo_all_blocks=1 00:25:06.142 --rc geninfo_unexecuted_blocks=1 00:25:06.142 00:25:06.142 ' 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:06.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.142 --rc genhtml_branch_coverage=1 00:25:06.142 --rc genhtml_function_coverage=1 00:25:06.142 --rc genhtml_legend=1 00:25:06.142 --rc geninfo_all_blocks=1 00:25:06.142 --rc geninfo_unexecuted_blocks=1 00:25:06.142 00:25:06.142 ' 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.142 17:19:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.142 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.142 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:06.142 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.143 17:19:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.711 17:19:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:12.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.711 17:19:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:12.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.711 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.712 17:19:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:12.712 Found net devices under 0000:86:00.0: cvl_0_0 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:12.712 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:12.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:12.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:25:12.712 00:25:12.712 --- 10.0.0.2 ping statistics --- 00:25:12.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.712 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:12.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:25:12.712 00:25:12.712 --- 10.0.0.1 ping statistics --- 00:25:12.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.712 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:12.712 
17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:12.712 17:19:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:15.247 Waiting for block devices as requested 00:25:15.247 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:15.247 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:15.247 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:15.247 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:15.247 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:15.248 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:15.248 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:15.507 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:15.507 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:15.507 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:15.766 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:15.766 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:15.766 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:15.766 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:16.025 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:16.025 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:16.025 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:16.284 No valid GPT data, bailing 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:16.284 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:16.285 00:25:16.285 Discovery Log Number of Records 2, Generation counter 2 00:25:16.285 =====Discovery Log Entry 0====== 00:25:16.285 trtype: tcp 00:25:16.285 adrfam: ipv4 00:25:16.285 subtype: current discovery subsystem 
00:25:16.285 treq: not specified, sq flow control disable supported 00:25:16.285 portid: 1 00:25:16.285 trsvcid: 4420 00:25:16.285 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:16.285 traddr: 10.0.0.1 00:25:16.285 eflags: none 00:25:16.285 sectype: none 00:25:16.285 =====Discovery Log Entry 1====== 00:25:16.285 trtype: tcp 00:25:16.285 adrfam: ipv4 00:25:16.285 subtype: nvme subsystem 00:25:16.285 treq: not specified, sq flow control disable supported 00:25:16.285 portid: 1 00:25:16.285 trsvcid: 4420 00:25:16.285 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:16.285 traddr: 10.0.0.1 00:25:16.285 eflags: none 00:25:16.285 sectype: none 00:25:16.285 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:16.285 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:16.545 ===================================================== 00:25:16.545 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:16.545 ===================================================== 00:25:16.545 Controller Capabilities/Features 00:25:16.545 ================================ 00:25:16.545 Vendor ID: 0000 00:25:16.545 Subsystem Vendor ID: 0000 00:25:16.545 Serial Number: 5c0369dc83912791a42c 00:25:16.545 Model Number: Linux 00:25:16.545 Firmware Version: 6.8.9-20 00:25:16.545 Recommended Arb Burst: 0 00:25:16.545 IEEE OUI Identifier: 00 00 00 00:25:16.545 Multi-path I/O 00:25:16.545 May have multiple subsystem ports: No 00:25:16.545 May have multiple controllers: No 00:25:16.545 Associated with SR-IOV VF: No 00:25:16.545 Max Data Transfer Size: Unlimited 00:25:16.545 Max Number of Namespaces: 0 00:25:16.545 Max Number of I/O Queues: 1024 00:25:16.545 NVMe Specification Version (VS): 1.3 00:25:16.545 NVMe Specification Version (Identify): 1.3 00:25:16.545 Maximum Queue Entries: 1024 
00:25:16.545 Contiguous Queues Required: No 00:25:16.545 Arbitration Mechanisms Supported 00:25:16.545 Weighted Round Robin: Not Supported 00:25:16.545 Vendor Specific: Not Supported 00:25:16.545 Reset Timeout: 7500 ms 00:25:16.545 Doorbell Stride: 4 bytes 00:25:16.545 NVM Subsystem Reset: Not Supported 00:25:16.545 Command Sets Supported 00:25:16.545 NVM Command Set: Supported 00:25:16.545 Boot Partition: Not Supported 00:25:16.545 Memory Page Size Minimum: 4096 bytes 00:25:16.545 Memory Page Size Maximum: 4096 bytes 00:25:16.545 Persistent Memory Region: Not Supported 00:25:16.545 Optional Asynchronous Events Supported 00:25:16.545 Namespace Attribute Notices: Not Supported 00:25:16.545 Firmware Activation Notices: Not Supported 00:25:16.545 ANA Change Notices: Not Supported 00:25:16.545 PLE Aggregate Log Change Notices: Not Supported 00:25:16.545 LBA Status Info Alert Notices: Not Supported 00:25:16.545 EGE Aggregate Log Change Notices: Not Supported 00:25:16.545 Normal NVM Subsystem Shutdown event: Not Supported 00:25:16.545 Zone Descriptor Change Notices: Not Supported 00:25:16.545 Discovery Log Change Notices: Supported 00:25:16.545 Controller Attributes 00:25:16.545 128-bit Host Identifier: Not Supported 00:25:16.545 Non-Operational Permissive Mode: Not Supported 00:25:16.545 NVM Sets: Not Supported 00:25:16.545 Read Recovery Levels: Not Supported 00:25:16.545 Endurance Groups: Not Supported 00:25:16.545 Predictable Latency Mode: Not Supported 00:25:16.545 Traffic Based Keep ALive: Not Supported 00:25:16.545 Namespace Granularity: Not Supported 00:25:16.545 SQ Associations: Not Supported 00:25:16.545 UUID List: Not Supported 00:25:16.545 Multi-Domain Subsystem: Not Supported 00:25:16.545 Fixed Capacity Management: Not Supported 00:25:16.545 Variable Capacity Management: Not Supported 00:25:16.545 Delete Endurance Group: Not Supported 00:25:16.545 Delete NVM Set: Not Supported 00:25:16.545 Extended LBA Formats Supported: Not Supported 00:25:16.545 Flexible 
Data Placement Supported: Not Supported 00:25:16.545 00:25:16.545 Controller Memory Buffer Support 00:25:16.545 ================================ 00:25:16.545 Supported: No 00:25:16.545 00:25:16.545 Persistent Memory Region Support 00:25:16.545 ================================ 00:25:16.545 Supported: No 00:25:16.545 00:25:16.545 Admin Command Set Attributes 00:25:16.545 ============================ 00:25:16.545 Security Send/Receive: Not Supported 00:25:16.545 Format NVM: Not Supported 00:25:16.545 Firmware Activate/Download: Not Supported 00:25:16.545 Namespace Management: Not Supported 00:25:16.545 Device Self-Test: Not Supported 00:25:16.545 Directives: Not Supported 00:25:16.545 NVMe-MI: Not Supported 00:25:16.545 Virtualization Management: Not Supported 00:25:16.545 Doorbell Buffer Config: Not Supported 00:25:16.545 Get LBA Status Capability: Not Supported 00:25:16.545 Command & Feature Lockdown Capability: Not Supported 00:25:16.545 Abort Command Limit: 1 00:25:16.545 Async Event Request Limit: 1 00:25:16.545 Number of Firmware Slots: N/A 00:25:16.545 Firmware Slot 1 Read-Only: N/A 00:25:16.545 Firmware Activation Without Reset: N/A 00:25:16.545 Multiple Update Detection Support: N/A 00:25:16.545 Firmware Update Granularity: No Information Provided 00:25:16.545 Per-Namespace SMART Log: No 00:25:16.545 Asymmetric Namespace Access Log Page: Not Supported 00:25:16.545 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:16.545 Command Effects Log Page: Not Supported 00:25:16.545 Get Log Page Extended Data: Supported 00:25:16.545 Telemetry Log Pages: Not Supported 00:25:16.545 Persistent Event Log Pages: Not Supported 00:25:16.545 Supported Log Pages Log Page: May Support 00:25:16.545 Commands Supported & Effects Log Page: Not Supported 00:25:16.545 Feature Identifiers & Effects Log Page:May Support 00:25:16.545 NVMe-MI Commands & Effects Log Page: May Support 00:25:16.545 Data Area 4 for Telemetry Log: Not Supported 00:25:16.545 Error Log Page Entries 
Supported: 1 00:25:16.545 Keep Alive: Not Supported 00:25:16.545 00:25:16.545 NVM Command Set Attributes 00:25:16.545 ========================== 00:25:16.545 Submission Queue Entry Size 00:25:16.546 Max: 1 00:25:16.546 Min: 1 00:25:16.546 Completion Queue Entry Size 00:25:16.546 Max: 1 00:25:16.546 Min: 1 00:25:16.546 Number of Namespaces: 0 00:25:16.546 Compare Command: Not Supported 00:25:16.546 Write Uncorrectable Command: Not Supported 00:25:16.546 Dataset Management Command: Not Supported 00:25:16.546 Write Zeroes Command: Not Supported 00:25:16.546 Set Features Save Field: Not Supported 00:25:16.546 Reservations: Not Supported 00:25:16.546 Timestamp: Not Supported 00:25:16.546 Copy: Not Supported 00:25:16.546 Volatile Write Cache: Not Present 00:25:16.546 Atomic Write Unit (Normal): 1 00:25:16.546 Atomic Write Unit (PFail): 1 00:25:16.546 Atomic Compare & Write Unit: 1 00:25:16.546 Fused Compare & Write: Not Supported 00:25:16.546 Scatter-Gather List 00:25:16.546 SGL Command Set: Supported 00:25:16.546 SGL Keyed: Not Supported 00:25:16.546 SGL Bit Bucket Descriptor: Not Supported 00:25:16.546 SGL Metadata Pointer: Not Supported 00:25:16.546 Oversized SGL: Not Supported 00:25:16.546 SGL Metadata Address: Not Supported 00:25:16.546 SGL Offset: Supported 00:25:16.546 Transport SGL Data Block: Not Supported 00:25:16.546 Replay Protected Memory Block: Not Supported 00:25:16.546 00:25:16.546 Firmware Slot Information 00:25:16.546 ========================= 00:25:16.546 Active slot: 0 00:25:16.546 00:25:16.546 00:25:16.546 Error Log 00:25:16.546 ========= 00:25:16.546 00:25:16.546 Active Namespaces 00:25:16.546 ================= 00:25:16.546 Discovery Log Page 00:25:16.546 ================== 00:25:16.546 Generation Counter: 2 00:25:16.546 Number of Records: 2 00:25:16.546 Record Format: 0 00:25:16.546 00:25:16.546 Discovery Log Entry 0 00:25:16.546 ---------------------- 00:25:16.546 Transport Type: 3 (TCP) 00:25:16.546 Address Family: 1 (IPv4) 00:25:16.546 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:16.546 Entry Flags: 00:25:16.546 Duplicate Returned Information: 0 00:25:16.546 Explicit Persistent Connection Support for Discovery: 0 00:25:16.546 Transport Requirements: 00:25:16.546 Secure Channel: Not Specified 00:25:16.546 Port ID: 1 (0x0001) 00:25:16.546 Controller ID: 65535 (0xffff) 00:25:16.546 Admin Max SQ Size: 32 00:25:16.546 Transport Service Identifier: 4420 00:25:16.546 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:16.546 Transport Address: 10.0.0.1 00:25:16.546 Discovery Log Entry 1 00:25:16.546 ---------------------- 00:25:16.546 Transport Type: 3 (TCP) 00:25:16.546 Address Family: 1 (IPv4) 00:25:16.546 Subsystem Type: 2 (NVM Subsystem) 00:25:16.546 Entry Flags: 00:25:16.546 Duplicate Returned Information: 0 00:25:16.546 Explicit Persistent Connection Support for Discovery: 0 00:25:16.546 Transport Requirements: 00:25:16.546 Secure Channel: Not Specified 00:25:16.546 Port ID: 1 (0x0001) 00:25:16.546 Controller ID: 65535 (0xffff) 00:25:16.546 Admin Max SQ Size: 32 00:25:16.546 Transport Service Identifier: 4420 00:25:16.546 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:16.546 Transport Address: 10.0.0.1 00:25:16.546 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:16.546 get_feature(0x01) failed 00:25:16.546 get_feature(0x02) failed 00:25:16.546 get_feature(0x04) failed 00:25:16.546 ===================================================== 00:25:16.546 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:16.546 ===================================================== 00:25:16.546 Controller Capabilities/Features 00:25:16.546 ================================ 00:25:16.546 Vendor ID: 0000 00:25:16.546 Subsystem Vendor ID: 
0000 00:25:16.546 Serial Number: e8964fc655725ba43387 00:25:16.546 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:16.546 Firmware Version: 6.8.9-20 00:25:16.546 Recommended Arb Burst: 6 00:25:16.546 IEEE OUI Identifier: 00 00 00 00:25:16.546 Multi-path I/O 00:25:16.546 May have multiple subsystem ports: Yes 00:25:16.546 May have multiple controllers: Yes 00:25:16.546 Associated with SR-IOV VF: No 00:25:16.546 Max Data Transfer Size: Unlimited 00:25:16.546 Max Number of Namespaces: 1024 00:25:16.546 Max Number of I/O Queues: 128 00:25:16.546 NVMe Specification Version (VS): 1.3 00:25:16.546 NVMe Specification Version (Identify): 1.3 00:25:16.546 Maximum Queue Entries: 1024 00:25:16.546 Contiguous Queues Required: No 00:25:16.546 Arbitration Mechanisms Supported 00:25:16.546 Weighted Round Robin: Not Supported 00:25:16.546 Vendor Specific: Not Supported 00:25:16.546 Reset Timeout: 7500 ms 00:25:16.546 Doorbell Stride: 4 bytes 00:25:16.546 NVM Subsystem Reset: Not Supported 00:25:16.546 Command Sets Supported 00:25:16.546 NVM Command Set: Supported 00:25:16.546 Boot Partition: Not Supported 00:25:16.546 Memory Page Size Minimum: 4096 bytes 00:25:16.546 Memory Page Size Maximum: 4096 bytes 00:25:16.546 Persistent Memory Region: Not Supported 00:25:16.546 Optional Asynchronous Events Supported 00:25:16.546 Namespace Attribute Notices: Supported 00:25:16.546 Firmware Activation Notices: Not Supported 00:25:16.546 ANA Change Notices: Supported 00:25:16.546 PLE Aggregate Log Change Notices: Not Supported 00:25:16.546 LBA Status Info Alert Notices: Not Supported 00:25:16.546 EGE Aggregate Log Change Notices: Not Supported 00:25:16.546 Normal NVM Subsystem Shutdown event: Not Supported 00:25:16.546 Zone Descriptor Change Notices: Not Supported 00:25:16.546 Discovery Log Change Notices: Not Supported 00:25:16.546 Controller Attributes 00:25:16.546 128-bit Host Identifier: Supported 00:25:16.546 Non-Operational Permissive Mode: Not Supported 00:25:16.546 NVM Sets: Not 
Supported 00:25:16.546 Read Recovery Levels: Not Supported 00:25:16.546 Endurance Groups: Not Supported 00:25:16.546 Predictable Latency Mode: Not Supported 00:25:16.546 Traffic Based Keep ALive: Supported 00:25:16.546 Namespace Granularity: Not Supported 00:25:16.546 SQ Associations: Not Supported 00:25:16.546 UUID List: Not Supported 00:25:16.546 Multi-Domain Subsystem: Not Supported 00:25:16.546 Fixed Capacity Management: Not Supported 00:25:16.546 Variable Capacity Management: Not Supported 00:25:16.546 Delete Endurance Group: Not Supported 00:25:16.546 Delete NVM Set: Not Supported 00:25:16.546 Extended LBA Formats Supported: Not Supported 00:25:16.546 Flexible Data Placement Supported: Not Supported 00:25:16.546 00:25:16.546 Controller Memory Buffer Support 00:25:16.546 ================================ 00:25:16.546 Supported: No 00:25:16.546 00:25:16.546 Persistent Memory Region Support 00:25:16.547 ================================ 00:25:16.547 Supported: No 00:25:16.547 00:25:16.547 Admin Command Set Attributes 00:25:16.547 ============================ 00:25:16.547 Security Send/Receive: Not Supported 00:25:16.547 Format NVM: Not Supported 00:25:16.547 Firmware Activate/Download: Not Supported 00:25:16.547 Namespace Management: Not Supported 00:25:16.547 Device Self-Test: Not Supported 00:25:16.547 Directives: Not Supported 00:25:16.547 NVMe-MI: Not Supported 00:25:16.547 Virtualization Management: Not Supported 00:25:16.547 Doorbell Buffer Config: Not Supported 00:25:16.547 Get LBA Status Capability: Not Supported 00:25:16.547 Command & Feature Lockdown Capability: Not Supported 00:25:16.547 Abort Command Limit: 4 00:25:16.547 Async Event Request Limit: 4 00:25:16.547 Number of Firmware Slots: N/A 00:25:16.547 Firmware Slot 1 Read-Only: N/A 00:25:16.547 Firmware Activation Without Reset: N/A 00:25:16.547 Multiple Update Detection Support: N/A 00:25:16.547 Firmware Update Granularity: No Information Provided 00:25:16.547 Per-Namespace SMART Log: Yes 
00:25:16.547 Asymmetric Namespace Access Log Page: Supported 00:25:16.547 ANA Transition Time : 10 sec 00:25:16.547 00:25:16.547 Asymmetric Namespace Access Capabilities 00:25:16.547 ANA Optimized State : Supported 00:25:16.547 ANA Non-Optimized State : Supported 00:25:16.547 ANA Inaccessible State : Supported 00:25:16.547 ANA Persistent Loss State : Supported 00:25:16.547 ANA Change State : Supported 00:25:16.547 ANAGRPID is not changed : No 00:25:16.547 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:16.547 00:25:16.547 ANA Group Identifier Maximum : 128 00:25:16.547 Number of ANA Group Identifiers : 128 00:25:16.547 Max Number of Allowed Namespaces : 1024 00:25:16.547 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:16.547 Command Effects Log Page: Supported 00:25:16.547 Get Log Page Extended Data: Supported 00:25:16.547 Telemetry Log Pages: Not Supported 00:25:16.547 Persistent Event Log Pages: Not Supported 00:25:16.547 Supported Log Pages Log Page: May Support 00:25:16.547 Commands Supported & Effects Log Page: Not Supported 00:25:16.547 Feature Identifiers & Effects Log Page:May Support 00:25:16.547 NVMe-MI Commands & Effects Log Page: May Support 00:25:16.547 Data Area 4 for Telemetry Log: Not Supported 00:25:16.547 Error Log Page Entries Supported: 128 00:25:16.547 Keep Alive: Supported 00:25:16.547 Keep Alive Granularity: 1000 ms 00:25:16.547 00:25:16.547 NVM Command Set Attributes 00:25:16.547 ========================== 00:25:16.547 Submission Queue Entry Size 00:25:16.547 Max: 64 00:25:16.547 Min: 64 00:25:16.547 Completion Queue Entry Size 00:25:16.547 Max: 16 00:25:16.547 Min: 16 00:25:16.547 Number of Namespaces: 1024 00:25:16.547 Compare Command: Not Supported 00:25:16.547 Write Uncorrectable Command: Not Supported 00:25:16.547 Dataset Management Command: Supported 00:25:16.547 Write Zeroes Command: Supported 00:25:16.547 Set Features Save Field: Not Supported 00:25:16.547 Reservations: Not Supported 00:25:16.547 Timestamp: Not Supported 
00:25:16.547 Copy: Not Supported 00:25:16.547 Volatile Write Cache: Present 00:25:16.547 Atomic Write Unit (Normal): 1 00:25:16.547 Atomic Write Unit (PFail): 1 00:25:16.547 Atomic Compare & Write Unit: 1 00:25:16.547 Fused Compare & Write: Not Supported 00:25:16.547 Scatter-Gather List 00:25:16.547 SGL Command Set: Supported 00:25:16.547 SGL Keyed: Not Supported 00:25:16.547 SGL Bit Bucket Descriptor: Not Supported 00:25:16.547 SGL Metadata Pointer: Not Supported 00:25:16.547 Oversized SGL: Not Supported 00:25:16.547 SGL Metadata Address: Not Supported 00:25:16.547 SGL Offset: Supported 00:25:16.547 Transport SGL Data Block: Not Supported 00:25:16.547 Replay Protected Memory Block: Not Supported 00:25:16.547 00:25:16.547 Firmware Slot Information 00:25:16.547 ========================= 00:25:16.547 Active slot: 0 00:25:16.547 00:25:16.547 Asymmetric Namespace Access 00:25:16.547 =========================== 00:25:16.547 Change Count : 0 00:25:16.547 Number of ANA Group Descriptors : 1 00:25:16.547 ANA Group Descriptor : 0 00:25:16.547 ANA Group ID : 1 00:25:16.547 Number of NSID Values : 1 00:25:16.547 Change Count : 0 00:25:16.547 ANA State : 1 00:25:16.547 Namespace Identifier : 1 00:25:16.547 00:25:16.547 Commands Supported and Effects 00:25:16.547 ============================== 00:25:16.547 Admin Commands 00:25:16.547 -------------- 00:25:16.547 Get Log Page (02h): Supported 00:25:16.547 Identify (06h): Supported 00:25:16.547 Abort (08h): Supported 00:25:16.547 Set Features (09h): Supported 00:25:16.547 Get Features (0Ah): Supported 00:25:16.547 Asynchronous Event Request (0Ch): Supported 00:25:16.547 Keep Alive (18h): Supported 00:25:16.547 I/O Commands 00:25:16.547 ------------ 00:25:16.547 Flush (00h): Supported 00:25:16.547 Write (01h): Supported LBA-Change 00:25:16.547 Read (02h): Supported 00:25:16.547 Write Zeroes (08h): Supported LBA-Change 00:25:16.547 Dataset Management (09h): Supported 00:25:16.547 00:25:16.547 Error Log 00:25:16.547 ========= 
00:25:16.547 Entry: 0 00:25:16.547 Error Count: 0x3 00:25:16.547 Submission Queue Id: 0x0 00:25:16.547 Command Id: 0x5 00:25:16.547 Phase Bit: 0 00:25:16.547 Status Code: 0x2 00:25:16.547 Status Code Type: 0x0 00:25:16.547 Do Not Retry: 1 00:25:16.547 Error Location: 0x28 00:25:16.547 LBA: 0x0 00:25:16.547 Namespace: 0x0 00:25:16.547 Vendor Log Page: 0x0 00:25:16.547 ----------- 00:25:16.547 Entry: 1 00:25:16.547 Error Count: 0x2 00:25:16.547 Submission Queue Id: 0x0 00:25:16.547 Command Id: 0x5 00:25:16.547 Phase Bit: 0 00:25:16.547 Status Code: 0x2 00:25:16.547 Status Code Type: 0x0 00:25:16.547 Do Not Retry: 1 00:25:16.547 Error Location: 0x28 00:25:16.548 LBA: 0x0 00:25:16.548 Namespace: 0x0 00:25:16.548 Vendor Log Page: 0x0 00:25:16.548 ----------- 00:25:16.548 Entry: 2 00:25:16.548 Error Count: 0x1 00:25:16.548 Submission Queue Id: 0x0 00:25:16.548 Command Id: 0x4 00:25:16.548 Phase Bit: 0 00:25:16.548 Status Code: 0x2 00:25:16.548 Status Code Type: 0x0 00:25:16.548 Do Not Retry: 1 00:25:16.548 Error Location: 0x28 00:25:16.548 LBA: 0x0 00:25:16.548 Namespace: 0x0 00:25:16.548 Vendor Log Page: 0x0 00:25:16.548 00:25:16.548 Number of Queues 00:25:16.548 ================ 00:25:16.548 Number of I/O Submission Queues: 128 00:25:16.548 Number of I/O Completion Queues: 128 00:25:16.548 00:25:16.548 ZNS Specific Controller Data 00:25:16.548 ============================ 00:25:16.548 Zone Append Size Limit: 0 00:25:16.548 00:25:16.548 00:25:16.548 Active Namespaces 00:25:16.548 ================= 00:25:16.548 get_feature(0x05) failed 00:25:16.548 Namespace ID:1 00:25:16.548 Command Set Identifier: NVM (00h) 00:25:16.548 Deallocate: Supported 00:25:16.548 Deallocated/Unwritten Error: Not Supported 00:25:16.548 Deallocated Read Value: Unknown 00:25:16.548 Deallocate in Write Zeroes: Not Supported 00:25:16.548 Deallocated Guard Field: 0xFFFF 00:25:16.548 Flush: Supported 00:25:16.548 Reservation: Not Supported 00:25:16.548 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:16.548 Size (in LBAs): 3125627568 (1490GiB) 00:25:16.548 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:16.548 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:16.548 UUID: aece6f8b-54f7-480a-80ff-f0c36bf455e2 00:25:16.548 Thin Provisioning: Not Supported 00:25:16.548 Per-NS Atomic Units: Yes 00:25:16.548 Atomic Boundary Size (Normal): 0 00:25:16.548 Atomic Boundary Size (PFail): 0 00:25:16.548 Atomic Boundary Offset: 0 00:25:16.548 NGUID/EUI64 Never Reused: No 00:25:16.548 ANA group ID: 1 00:25:16.548 Namespace Write Protected: No 00:25:16.548 Number of LBA Formats: 1 00:25:16.548 Current LBA Format: LBA Format #00 00:25:16.548 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:16.548 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:16.548 rmmod nvme_tcp 00:25:16.548 rmmod nvme_fabrics 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.548 17:19:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:19.084 17:19:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:19.084 17:19:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:21.616 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:21.616 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:23.005 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:23.264 00:25:23.264 real 0m17.321s 00:25:23.264 user 0m4.387s 00:25:23.264 sys 0m8.709s 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.264 ************************************ 00:25:23.264 END TEST nvmf_identify_kernel_target 00:25:23.264 ************************************ 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.264 ************************************ 00:25:23.264 START TEST nvmf_auth_host 00:25:23.264 ************************************ 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:23.264 * Looking for test storage... 
00:25:23.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:23.264 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.524 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.525 --rc genhtml_branch_coverage=1 00:25:23.525 --rc genhtml_function_coverage=1 00:25:23.525 --rc genhtml_legend=1 00:25:23.525 --rc geninfo_all_blocks=1 00:25:23.525 --rc geninfo_unexecuted_blocks=1 00:25:23.525 00:25:23.525 ' 00:25:23.525 17:19:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.525 --rc genhtml_branch_coverage=1 00:25:23.525 --rc genhtml_function_coverage=1 00:25:23.525 --rc genhtml_legend=1 00:25:23.525 --rc geninfo_all_blocks=1 00:25:23.525 --rc geninfo_unexecuted_blocks=1 00:25:23.525 00:25:23.525 ' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.525 --rc genhtml_branch_coverage=1 00:25:23.525 --rc genhtml_function_coverage=1 00:25:23.525 --rc genhtml_legend=1 00:25:23.525 --rc geninfo_all_blocks=1 00:25:23.525 --rc geninfo_unexecuted_blocks=1 00:25:23.525 00:25:23.525 ' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.525 --rc genhtml_branch_coverage=1 00:25:23.525 --rc genhtml_function_coverage=1 00:25:23.525 --rc genhtml_legend=1 00:25:23.525 --rc geninfo_all_blocks=1 00:25:23.525 --rc geninfo_unexecuted_blocks=1 00:25:23.525 00:25:23.525 ' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.525 17:19:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:23.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.525 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.526 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:23.526 17:19:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:23.526 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:23.526 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:30.096 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.096 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:30.097 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:30.097 Found net devices under 0000:86:00.0: cvl_0_0 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:30.097 Found net devices under 0000:86:00.1: cvl_0_1 00:25:30.097 17:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.097 17:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:25:30.097 00:25:30.097 --- 10.0.0.2 ping statistics --- 00:25:30.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.097 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:25:30.097 00:25:30.097 --- 10.0.0.1 ping statistics --- 00:25:30.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.097 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2626228 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:30.097 17:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2626228 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2626228 ']' 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.097 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3427ee8e1b9424a73107f2369a9185d2 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.K0e 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3427ee8e1b9424a73107f2369a9185d2 0 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3427ee8e1b9424a73107f2369a9185d2 0 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3427ee8e1b9424a73107f2369a9185d2 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.K0e 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.K0e 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.K0e 00:25:30.098 17:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b6fdd49ce6111abd68da7c1fe5b318849772f95d167ab1bf2dbb5bb84e2947e3 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tYf 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b6fdd49ce6111abd68da7c1fe5b318849772f95d167ab1bf2dbb5bb84e2947e3 3 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b6fdd49ce6111abd68da7c1fe5b318849772f95d167ab1bf2dbb5bb84e2947e3 3 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b6fdd49ce6111abd68da7c1fe5b318849772f95d167ab1bf2dbb5bb84e2947e3 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tYf 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tYf 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.tYf 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d6e2246e5c57051f6bfed8eb41222d9b59fc71d075fa44e6 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZDc 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6e2246e5c57051f6bfed8eb41222d9b59fc71d075fa44e6 0 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6e2246e5c57051f6bfed8eb41222d9b59fc71d075fa44e6 0 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.098 17:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6e2246e5c57051f6bfed8eb41222d9b59fc71d075fa44e6 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZDc 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZDc 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ZDc 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f3190a04d07fd282f1a7c446f4c5b7b236932611c477763 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.liO 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f3190a04d07fd282f1a7c446f4c5b7b236932611c477763 2 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 3f3190a04d07fd282f1a7c446f4c5b7b236932611c477763 2 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f3190a04d07fd282f1a7c446f4c5b7b236932611c477763 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.liO 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.liO 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.liO 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4e619a5d56aef1a359b8ba349a8e7528 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.muz 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4e619a5d56aef1a359b8ba349a8e7528 1 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4e619a5d56aef1a359b8ba349a8e7528 1 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4e619a5d56aef1a359b8ba349a8e7528 00:25:30.098 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.muz 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.muz 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.muz 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=1a0ccd2d0b84b2e0d435ed9b231bab64 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.D71 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1a0ccd2d0b84b2e0d435ed9b231bab64 1 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1a0ccd2d0b84b2e0d435ed9b231bab64 1 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1a0ccd2d0b84b2e0d435ed9b231bab64 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:30.099 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.D71 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.D71 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.D71 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:30.099 17:19:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cbb0f21fc57a8027138f6dda9f3f2ff029456c2fa9d09ac0 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fKQ 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cbb0f21fc57a8027138f6dda9f3f2ff029456c2fa9d09ac0 2 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cbb0f21fc57a8027138f6dda9f3f2ff029456c2fa9d09ac0 2 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cbb0f21fc57a8027138f6dda9f3f2ff029456c2fa9d09ac0 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fKQ 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fKQ 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.fKQ 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79a30ea5c978343ecc20f3860dd6c34d 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QZO 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79a30ea5c978343ecc20f3860dd6c34d 0 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79a30ea5c978343ecc20f3860dd6c34d 0 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=79a30ea5c978343ecc20f3860dd6c34d 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QZO 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QZO 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.QZO 00:25:30.099 17:19:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.099 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c5f8c46a5a6396eb576f05be4642e753c6b9926e8132723c6d33932490a7d80c 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Eob 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c5f8c46a5a6396eb576f05be4642e753c6b9926e8132723c6d33932490a7d80c 3 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c5f8c46a5a6396eb576f05be4642e753c6b9926e8132723c6d33932490a7d80c 3 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c5f8c46a5a6396eb576f05be4642e753c6b9926e8132723c6d33932490a7d80c 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Eob 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Eob 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Eob 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:30.358 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2626228 00:25:30.359 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2626228 ']' 00:25:30.359 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.359 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.359 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:30.359 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:30.359 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.617 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:30.617 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.K0e
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.tYf ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tYf
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ZDc
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.liO ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.liO
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.muz
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.D71 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.D71
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.fKQ
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.QZO ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.QZO
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Eob
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:30.618 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:33.150 Waiting for block devices as requested
00:25:33.150 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:33.409 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:33.409 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:33.667 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:33.667 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:33.667 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:33.667 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:33.925 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:33.925 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:33.925 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:33.925 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:34.184 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:34.184 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:34.184 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:34.184 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:34.442 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:34.442 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 No valid GPT data, bailing
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:25:35.009 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:35.009 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:35.009 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:35.009 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:25:35.009 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:25:35.009 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:25:35.009 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:25:35.010 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:25:35.010 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:25:35.010 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:25:35.010 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:25:35.010 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:35.010 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:35.268
00:25:35.268 Discovery Log Number of Records 2, Generation counter 2
00:25:35.268 =====Discovery Log Entry 0======
00:25:35.268 trtype: tcp
00:25:35.268 adrfam: ipv4
00:25:35.268 subtype: current discovery subsystem
00:25:35.268 treq: not specified, sq flow control disable supported
00:25:35.268 portid: 1
00:25:35.268 trsvcid: 4420
00:25:35.268 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:35.268 traddr: 10.0.0.1
00:25:35.268 eflags: none
00:25:35.268 sectype: none
00:25:35.268 =====Discovery Log Entry 1======
00:25:35.268 trtype: tcp
00:25:35.269 adrfam: ipv4
00:25:35.269 subtype: nvme subsystem
00:25:35.269 treq: not specified, sq flow control disable supported
00:25:35.269 portid: 1
00:25:35.269 trsvcid: 4420
00:25:35.269 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:35.269 traddr: 10.0.0.1
00:25:35.269 eflags: none
00:25:35.269 sectype: none
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==:
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==:
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==:
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]]
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==:
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.269 nvme0n1
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.269 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT:
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=:
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT:
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]]
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=:
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:35.526 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.527 nvme0n1
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.527 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==:
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==:
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==:
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==:
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.785 nvme0n1
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En:
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa:
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En:
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa:
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:35.785 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.044 nvme0n1
00:25:36.044 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.044 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.044 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.044 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.044 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.044 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==:
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV:
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==:
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]]
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV:
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:36.044 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.303 nvme0n1
00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd
bdev_nvme_get_controllers 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:36.303 17:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.303 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.304 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.563 nvme0n1 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.563 
17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:36.563 
17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.563 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.564 17:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.564 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.564 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.564 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.564 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.564 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.822 nvme0n1 00:25:36.822 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.822 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.822 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.822 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.823 17:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.823 17:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.823 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.082 nvme0n1 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.082 17:19:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.082 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.341 nvme0n1 00:25:37.341 17:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:37.341 17:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.341 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.600 nvme0n1 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.600 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.601 17:19:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.601 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.859 nvme0n1 00:25:37.859 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.859 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.859 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.860 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.119 nvme0n1 00:25:38.119 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.119 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.119 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.119 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.119 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.119 
17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.378 nvme0n1 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.378 17:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.378 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.636 nvme0n1 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.636 17:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.636 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.894 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.894 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.894 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:38.894 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:38.895 
17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.895 17:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.895 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.154 nvme0n1 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.154 17:19:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.154 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.154 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.155 
17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.155 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.413 nvme0n1 00:25:39.413 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.413 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.413 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.413 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.414 17:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.414 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.981 nvme0n1 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.981 17:19:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.981 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.240 nvme0n1 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.240 17:19:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.240 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.241 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.808 nvme0n1 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.808 17:19:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.808 17:19:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:40.808 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.809 17:19:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.809 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.067 nvme0n1 00:25:41.067 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.067 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.067 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.067 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.067 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.067 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.326 17:19:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.326 17:19:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.326 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.584 nvme0n1 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.584 17:19:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.584 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.585 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.585 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.585 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.151 nvme0n1 00:25:42.151 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.410 17:20:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.410 17:20:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.410 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.410 17:20:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.977 nvme0n1 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:42.977 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.978 17:20:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.978 17:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.545 nvme0n1 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.545 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.546 17:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.112 nvme0n1 00:25:44.112 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.112 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.112 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.112 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.112 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.112 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.371 
17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.371 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.940 nvme0n1 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.940 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.199 nvme0n1 00:25:45.199 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.199 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.199 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.199 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.199 17:20:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.200 
17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 nvme0n1 
00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.200 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:45.458 17:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.458 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.459 
17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.459 nvme0n1 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.459 17:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.459 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.718 nvme0n1 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.718 17:20:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:45.718 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.719 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.977 nvme0n1 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.977 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.978 17:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.238 nvme0n1 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.238 
17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.238 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.497 nvme0n1 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 
00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.497 17:20:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.497 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.756 nvme0n1 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.756 17:20:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.756 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.757 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.757 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.757 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.015 nvme0n1 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.015 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.016 17:20:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.016 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.016 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.274 nvme0n1 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.274 17:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.274 17:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:47.274 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.274 17:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.532 nvme0n1 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.532 
17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.532 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.791 nvme0n1 00:25:47.791 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.791 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.791 17:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.791 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.791 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.791 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.050 17:20:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.050 17:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.309 nvme0n1 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.309 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.310 17:20:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.310 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.569 nvme0n1 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.569 17:20:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.569 17:20:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.569 
17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.569 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.828 nvme0n1 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.828 17:20:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.828 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.829 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.829 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.829 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.829 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.829 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.829 17:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.396 nvme0n1 
00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:49.396 17:20:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.396 
17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.396 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.655 nvme0n1 00:25:49.655 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.655 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.655 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.655 17:20:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.655 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.655 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.913 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:49.914 17:20:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.914 17:20:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.173 nvme0n1 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:50.173 17:20:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.173 17:20:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.173 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.740 nvme0n1 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.740 17:20:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:50.740 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:50.741 17:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.000 nvme0n1 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.000 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.261 17:20:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.261 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.262 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.834 nvme0n1 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.834 17:20:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.402 nvme0n1 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.402 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.403 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.969 nvme0n1 00:25:52.969 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.969 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.969 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.969 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.969 17:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.969 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.228 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.797 nvme0n1 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.797 17:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.368 nvme0n1 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.368 17:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.368 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.627 nvme0n1 00:25:54.627 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.627 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.627 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.627 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.627 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.627 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.627 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.627 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.628 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.887 nvme0n1 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.887 nvme0n1 00:25:54.887 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.147 17:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.147 nvme0n1 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.147 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.407 nvme0n1 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.407 17:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.407 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.666 17:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.666 nvme0n1 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.666 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:55.925 17:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.925 nvme0n1 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.925 
17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.925 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.185 17:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.185 17:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.185 nvme0n1 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.185 17:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.185 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.445 17:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.445 nvme0n1 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:56.445 17:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.445 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.704 nvme0n1 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.704 
17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.704 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.705 
17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.705 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.963 nvme0n1 00:25:56.963 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.963 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.963 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.963 17:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.963 17:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.963 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.222 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.223 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.482 nvme0n1 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.482 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.483 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.741 nvme0n1 00:25:57.741 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.741 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.741 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.742 17:20:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.742 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.001 nvme0n1 00:25:58.001 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.001 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.001 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.001 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.001 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.001 17:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.001 17:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.001 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.260 nvme0n1 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.260 17:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.260 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.519 17:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.519 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.779 nvme0n1 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.779 17:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.779 17:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.350 nvme0n1 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:25:59.350 
17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.350 17:20:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.350 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.609 nvme0n1 00:25:59.609 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.609 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.609 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.609 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.609 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.609 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.868 17:20:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.868 17:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.128 nvme0n1 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.128 17:20:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.128 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.695 nvme0n1 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.695 
17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQyN2VlOGUxYjk0MjRhNzMxMDdmMjM2OWE5MTg1ZDJe3xqT: 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: ]] 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjZmZGQ0OWNlNjExMWFiZDY4ZGE3YzFmZTViMzE4ODQ5NzcyZjk1ZDE2N2FiMWJmMmRiYjViYjg0ZTI5NDdlM8f6u84=: 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.695 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.696 17:20:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.696 17:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.263 nvme0n1 00:26:01.263 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.263 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.263 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.263 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.264 17:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:26:01.264 17:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.264 17:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.264 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.831 nvme0n1 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.831 17:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.831 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:02.089 17:20:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.089 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.090 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.090 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.090 17:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.655 nvme0n1 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2JiMGYyMWZjNTdhODAyNzEzOGY2ZGRhOWYzZjJmZjAyOTQ1NmMyZmE5ZDA5YWMwiQFmjg==: 00:26:02.655 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: ]] 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzlhMzBlYTVjOTc4MzQzZWNjMjBmMzg2MGRkNmMzNGSfydbV: 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.656 17:20:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.656 17:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.223 nvme0n1 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzVmOGM0NmE1YTYzOTZlYjU3NmYwNWJlNDY0MmU3NTNjNmI5OTI2ZTgxMzI3MjNjNmQzMzkzMjQ5MGE3ZDgwYwNWjg4=: 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.223 
17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.223 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.790 nvme0n1 00:26:03.790 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.790 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.790 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.790 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.790 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:03.790 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.048 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.048 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.048 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.048 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.048 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 request: 00:26:04.049 { 00:26:04.049 "name": "nvme0", 00:26:04.049 "trtype": "tcp", 00:26:04.049 "traddr": "10.0.0.1", 00:26:04.049 "adrfam": "ipv4", 00:26:04.049 "trsvcid": "4420", 00:26:04.049 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:04.049 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:04.049 "prchk_reftag": false, 00:26:04.049 "prchk_guard": false, 00:26:04.049 "hdgst": false, 00:26:04.049 "ddgst": false, 00:26:04.049 "allow_unrecognized_csi": false, 00:26:04.049 "method": "bdev_nvme_attach_controller", 00:26:04.049 "req_id": 1 00:26:04.049 } 00:26:04.049 Got JSON-RPC error 
response 00:26:04.049 response: 00:26:04.049 { 00:26:04.049 "code": -5, 00:26:04.049 "message": "Input/output error" 00:26:04.049 } 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.049 17:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 request: 
00:26:04.049 { 00:26:04.049 "name": "nvme0", 00:26:04.049 "trtype": "tcp", 00:26:04.049 "traddr": "10.0.0.1", 00:26:04.049 "adrfam": "ipv4", 00:26:04.049 "trsvcid": "4420", 00:26:04.049 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:04.049 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:04.049 "prchk_reftag": false, 00:26:04.049 "prchk_guard": false, 00:26:04.049 "hdgst": false, 00:26:04.049 "ddgst": false, 00:26:04.049 "dhchap_key": "key2", 00:26:04.049 "allow_unrecognized_csi": false, 00:26:04.049 "method": "bdev_nvme_attach_controller", 00:26:04.049 "req_id": 1 00:26:04.049 } 00:26:04.049 Got JSON-RPC error response 00:26:04.049 response: 00:26:04.049 { 00:26:04.049 "code": -5, 00:26:04.049 "message": "Input/output error" 00:26:04.049 } 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.308 17:20:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.308 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.308 request: 00:26:04.308 { 00:26:04.308 "name": "nvme0", 00:26:04.308 "trtype": "tcp", 00:26:04.308 "traddr": "10.0.0.1", 00:26:04.308 "adrfam": "ipv4", 00:26:04.308 "trsvcid": "4420", 00:26:04.308 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:04.308 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:04.308 "prchk_reftag": false, 00:26:04.308 "prchk_guard": false, 00:26:04.308 "hdgst": false, 00:26:04.308 "ddgst": false, 00:26:04.308 "dhchap_key": "key1", 00:26:04.308 "dhchap_ctrlr_key": "ckey2", 00:26:04.308 "allow_unrecognized_csi": false, 00:26:04.308 "method": "bdev_nvme_attach_controller", 00:26:04.308 "req_id": 1 00:26:04.308 } 00:26:04.309 Got JSON-RPC error response 00:26:04.309 response: 00:26:04.309 { 00:26:04.309 "code": -5, 00:26:04.309 "message": "Input/output error" 00:26:04.309 } 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.309 nvme0n1 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:04.309 17:20:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.309 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:04.567 
17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.567 request: 00:26:04.567 { 00:26:04.567 "name": "nvme0", 00:26:04.567 "dhchap_key": "key1", 00:26:04.567 "dhchap_ctrlr_key": "ckey2", 00:26:04.567 "method": "bdev_nvme_set_keys", 00:26:04.567 "req_id": 1 00:26:04.567 } 00:26:04.567 Got JSON-RPC error response 00:26:04.567 response: 
00:26:04.567 { 00:26:04.567 "code": -13, 00:26:04.567 "message": "Permission denied" 00:26:04.567 } 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:04.567 17:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:05.942 17:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.942 17:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:05.942 17:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.942 17:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.942 17:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.942 17:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:05.942 17:20:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:06.877 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlMjI0NmU1YzU3MDUxZjZiZmVkOGViNDEyMjJkOWI1OWZjNzFkMDc1ZmE0NGU2lXskIQ==: 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2YzMTkwYTA0ZDA3ZmQyODJmMWE3YzQ0NmY0YzViN2IyMzY5MzI2MTFjNDc3NzYz3hOq0Q==: 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.878 nvme0n1 00:26:06.878 17:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGU2MTlhNWQ1NmFlZjFhMzU5YjhiYTM0OWE4ZTc1Mjjnt3En: 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWEwY2NkMmQwYjg0YjJlMGQ0MzVlZDliMjMxYmFiNjQpGNHa: 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:06.878 17:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.878 request: 00:26:06.878 { 00:26:06.878 "name": "nvme0", 00:26:06.878 "dhchap_key": "key2", 00:26:06.878 "dhchap_ctrlr_key": "ckey1", 00:26:06.878 "method": "bdev_nvme_set_keys", 00:26:06.878 "req_id": 1 00:26:06.878 } 00:26:06.878 Got JSON-RPC error response 00:26:06.878 response: 00:26:06.878 { 00:26:06.878 "code": -13, 00:26:06.878 "message": "Permission denied" 00:26:06.878 } 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:06.878 17:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.878 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.136 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:07.136 17:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:08.073 17:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:08.073 rmmod nvme_tcp 
00:26:08.073 rmmod nvme_fabrics 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2626228 ']' 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2626228 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2626228 ']' 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2626228 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2626228 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2626228' 00:26:08.073 killing process with pid 2626228 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2626228 00:26:08.073 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2626228 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.332 17:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:10.868 17:20:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:10.868 17:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:13.402 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:13.402 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:14.780 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:14.780 17:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.K0e /tmp/spdk.key-null.ZDc /tmp/spdk.key-sha256.muz /tmp/spdk.key-sha384.fKQ 
/tmp/spdk.key-sha512.Eob /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:14.780 17:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:18.069 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:18.069 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:18.069 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:18.069 00:26:18.069 real 0m54.498s 00:26:18.069 user 0m48.594s 00:26:18.069 sys 0m12.689s 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.069 ************************************ 00:26:18.069 END TEST nvmf_auth_host 00:26:18.069 ************************************ 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.069 ************************************ 00:26:18.069 START TEST nvmf_digest 00:26:18.069 ************************************ 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:18.069 * Looking for test storage... 00:26:18.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.069 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.070 17:20:35 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:18.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.070 --rc genhtml_branch_coverage=1 00:26:18.070 --rc genhtml_function_coverage=1 00:26:18.070 --rc genhtml_legend=1 00:26:18.070 --rc geninfo_all_blocks=1 00:26:18.070 --rc geninfo_unexecuted_blocks=1 00:26:18.070 00:26:18.070 ' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:18.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.070 --rc genhtml_branch_coverage=1 00:26:18.070 --rc genhtml_function_coverage=1 00:26:18.070 --rc genhtml_legend=1 00:26:18.070 --rc geninfo_all_blocks=1 00:26:18.070 --rc geninfo_unexecuted_blocks=1 00:26:18.070 00:26:18.070 ' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:18.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.070 --rc genhtml_branch_coverage=1 00:26:18.070 --rc genhtml_function_coverage=1 00:26:18.070 --rc genhtml_legend=1 00:26:18.070 --rc geninfo_all_blocks=1 00:26:18.070 --rc geninfo_unexecuted_blocks=1 00:26:18.070 00:26:18.070 ' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:18.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.070 --rc genhtml_branch_coverage=1 00:26:18.070 --rc genhtml_function_coverage=1 00:26:18.070 --rc genhtml_legend=1 00:26:18.070 --rc geninfo_all_blocks=1 00:26:18.070 --rc geninfo_unexecuted_blocks=1 00:26:18.070 00:26:18.070 ' 00:26:18.070 17:20:35 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.070 
17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:18.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:18.070 17:20:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:18.070 17:20:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.642 17:20:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:24.642 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:24.642 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.642 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:24.643 Found net devices under 0000:86:00.0: cvl_0_0 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:24.643 Found net devices under 0000:86:00.1: cvl_0_1 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:24.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:26:24.643 00:26:24.643 --- 10.0.0.2 ping statistics --- 00:26:24.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.643 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:24.643 00:26:24.643 --- 10.0.0.1 ping statistics --- 00:26:24.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.643 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.643 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:24.643 ************************************ 00:26:24.643 START TEST nvmf_digest_clean 00:26:24.643 ************************************ 00:26:24.643 
17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2639991 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2639991 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2639991 ']' 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.644 17:20:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.644 17:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.644 [2024-11-20 17:20:42.006733] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:26:24.644 [2024-11-20 17:20:42.006772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.644 [2024-11-20 17:20:42.087459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.644 [2024-11-20 17:20:42.127749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.644 [2024-11-20 17:20:42.127786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.644 [2024-11-20 17:20:42.127793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.644 [2024-11-20 17:20:42.127798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.644 [2024-11-20 17:20:42.127803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:24.644 [2024-11-20 17:20:42.128391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.644 null0 00:26:24.644 [2024-11-20 17:20:42.276382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.644 [2024-11-20 17:20:42.300586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2640015 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2640015 /var/tmp/bperf.sock 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2640015 ']' 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.644 [2024-11-20 17:20:42.354565] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:26:24.644 [2024-11-20 17:20:42.354604] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640015 ] 00:26:24.644 [2024-11-20 17:20:42.428962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.644 [2024-11-20 17:20:42.471348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:24.644 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:24.645 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:24.902 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.902 17:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.159 nvme0n1 00:26:25.159 17:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:25.159 17:20:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.159 Running I/O for 2 seconds... 00:26:27.465 26179.00 IOPS, 102.26 MiB/s [2024-11-20T16:20:45.508Z] 25982.00 IOPS, 101.49 MiB/s 00:26:27.465 Latency(us) 00:26:27.465 [2024-11-20T16:20:45.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.465 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:27.465 nvme0n1 : 2.04 25489.58 99.57 0.00 0.00 4918.96 2106.51 44189.99 00:26:27.465 [2024-11-20T16:20:45.508Z] =================================================================================================================== 00:26:27.465 [2024-11-20T16:20:45.508Z] Total : 25489.58 99.57 0.00 0.00 4918.96 2106.51 44189.99 00:26:27.465 { 00:26:27.465 "results": [ 00:26:27.465 { 00:26:27.465 "job": "nvme0n1", 00:26:27.465 "core_mask": "0x2", 00:26:27.465 "workload": "randread", 00:26:27.465 "status": "finished", 00:26:27.465 "queue_depth": 128, 00:26:27.465 "io_size": 4096, 00:26:27.465 "runtime": 2.043659, 00:26:27.465 "iops": 25489.575315647082, 00:26:27.465 "mibps": 99.56865357674641, 00:26:27.465 "io_failed": 0, 00:26:27.465 "io_timeout": 0, 00:26:27.465 "avg_latency_us": 4918.956272967607, 00:26:27.465 "min_latency_us": 2106.5142857142855, 00:26:27.465 "max_latency_us": 44189.98857142857 00:26:27.465 } 00:26:27.465 ], 00:26:27.465 "core_count": 1 00:26:27.465 } 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:27.465 | select(.opcode=="crc32c") 00:26:27.465 | "\(.module_name) \(.executed)"' 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2640015 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2640015 ']' 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2640015 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2640015 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2640015' 00:26:27.465 killing process with pid 2640015 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2640015 00:26:27.465 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.465 00:26:27.465 Latency(us) 00:26:27.465 [2024-11-20T16:20:45.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.465 [2024-11-20T16:20:45.508Z] =================================================================================================================== 00:26:27.465 [2024-11-20T16:20:45.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.465 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2640015 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2640497 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2640497 /var/tmp/bperf.sock 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2640497 ']' 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:27.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.723 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:27.723 [2024-11-20 17:20:45.693552] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:26:27.723 [2024-11-20 17:20:45.693600] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2640497 ] 00:26:27.723 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:27.723 Zero copy mechanism will not be used. 
00:26:27.981 [2024-11-20 17:20:45.767412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.981 [2024-11-20 17:20:45.810100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.981 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.981 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:27.981 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:27.981 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:27.981 17:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:28.239 17:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.239 17:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.520 nvme0n1 00:26:28.520 17:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:28.520 17:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:28.813 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:28.813 Zero copy mechanism will not be used. 00:26:28.813 Running I/O for 2 seconds... 
00:26:30.837 6007.00 IOPS, 750.88 MiB/s [2024-11-20T16:20:48.880Z] 6088.00 IOPS, 761.00 MiB/s 00:26:30.837 Latency(us) 00:26:30.837 [2024-11-20T16:20:48.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.837 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:30.837 nvme0n1 : 2.00 6086.79 760.85 0.00 0.00 2625.93 624.15 4493.90 00:26:30.837 [2024-11-20T16:20:48.880Z] =================================================================================================================== 00:26:30.837 [2024-11-20T16:20:48.880Z] Total : 6086.79 760.85 0.00 0.00 2625.93 624.15 4493.90 00:26:30.837 { 00:26:30.837 "results": [ 00:26:30.837 { 00:26:30.837 "job": "nvme0n1", 00:26:30.837 "core_mask": "0x2", 00:26:30.837 "workload": "randread", 00:26:30.837 "status": "finished", 00:26:30.837 "queue_depth": 16, 00:26:30.837 "io_size": 131072, 00:26:30.837 "runtime": 2.003026, 00:26:30.837 "iops": 6086.790685692547, 00:26:30.837 "mibps": 760.8488357115684, 00:26:30.837 "io_failed": 0, 00:26:30.837 "io_timeout": 0, 00:26:30.837 "avg_latency_us": 2625.9279190101242, 00:26:30.837 "min_latency_us": 624.152380952381, 00:26:30.837 "max_latency_us": 4493.897142857143 00:26:30.837 } 00:26:30.837 ], 00:26:30.837 "core_count": 1 00:26:30.837 } 00:26:30.837 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:30.837 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:30.837 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:30.837 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:30.837 | select(.opcode=="crc32c") 00:26:30.837 | "\(.module_name) \(.executed)"' 00:26:30.837 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:30.837 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2640497 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2640497 ']' 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2640497 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2640497 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2640497' 00:26:30.838 killing process with pid 2640497 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2640497 00:26:30.838 Received shutdown signal, test time was about 2.000000 seconds 
00:26:30.838 00:26:30.838 Latency(us) 00:26:30.838 [2024-11-20T16:20:48.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.838 [2024-11-20T16:20:48.881Z] =================================================================================================================== 00:26:30.838 [2024-11-20T16:20:48.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.838 17:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2640497 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2641191 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2641191 /var/tmp/bperf.sock 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2641191 ']' 00:26:31.097 17:20:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.097 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:31.097 [2024-11-20 17:20:49.075309] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:26:31.097 [2024-11-20 17:20:49.075359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641191 ] 00:26:31.358 [2024-11-20 17:20:49.149778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.358 [2024-11-20 17:20:49.191687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.358 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.358 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:31.358 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:31.358 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:31.358 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:31.619 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.619 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.876 nvme0n1 00:26:31.876 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:31.876 17:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.134 Running I/O for 2 seconds... 
00:26:34.002 27970.00 IOPS, 109.26 MiB/s [2024-11-20T16:20:52.045Z] 28343.00 IOPS, 110.71 MiB/s 00:26:34.002 Latency(us) 00:26:34.002 [2024-11-20T16:20:52.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.002 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:34.002 nvme0n1 : 2.00 28343.85 110.72 0.00 0.00 4510.83 2231.34 11421.99 00:26:34.002 [2024-11-20T16:20:52.045Z] =================================================================================================================== 00:26:34.002 [2024-11-20T16:20:52.045Z] Total : 28343.85 110.72 0.00 0.00 4510.83 2231.34 11421.99 00:26:34.002 { 00:26:34.002 "results": [ 00:26:34.002 { 00:26:34.002 "job": "nvme0n1", 00:26:34.002 "core_mask": "0x2", 00:26:34.002 "workload": "randwrite", 00:26:34.002 "status": "finished", 00:26:34.002 "queue_depth": 128, 00:26:34.002 "io_size": 4096, 00:26:34.002 "runtime": 2.002198, 00:26:34.002 "iops": 28343.850108730505, 00:26:34.002 "mibps": 110.71816448722853, 00:26:34.002 "io_failed": 0, 00:26:34.002 "io_timeout": 0, 00:26:34.002 "avg_latency_us": 4510.832019198658, 00:26:34.002 "min_latency_us": 2231.344761904762, 00:26:34.002 "max_latency_us": 11421.988571428572 00:26:34.002 } 00:26:34.002 ], 00:26:34.002 "core_count": 1 00:26:34.002 } 00:26:34.002 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:34.002 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:34.002 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:34.002 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:34.002 | select(.opcode=="crc32c") 00:26:34.002 | "\(.module_name) \(.executed)"' 00:26:34.002 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2641191 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2641191 ']' 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2641191 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2641191 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2641191' 00:26:34.261 killing process with pid 2641191 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2641191 00:26:34.261 Received shutdown signal, test time was about 2.000000 seconds 
00:26:34.261 00:26:34.261 Latency(us) 00:26:34.261 [2024-11-20T16:20:52.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.261 [2024-11-20T16:20:52.304Z] =================================================================================================================== 00:26:34.261 [2024-11-20T16:20:52.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:34.261 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2641191 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2641665 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2641665 /var/tmp/bperf.sock 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2641665 ']' 00:26:34.520 17:20:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:34.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.520 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:34.520 [2024-11-20 17:20:52.447807] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:26:34.520 [2024-11-20 17:20:52.447853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641665 ] 00:26:34.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:34.520 Zero copy mechanism will not be used. 
00:26:34.520 [2024-11-20 17:20:52.520010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.779 [2024-11-20 17:20:52.562063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.779 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.779 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:34.779 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:34.779 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:34.779 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:35.037 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.037 17:20:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.295 nvme0n1 00:26:35.295 17:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:35.295 17:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.295 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.295 Zero copy mechanism will not be used. 00:26:35.295 Running I/O for 2 seconds... 
00:26:37.626 6416.00 IOPS, 802.00 MiB/s [2024-11-20T16:20:55.669Z] 6585.50 IOPS, 823.19 MiB/s 00:26:37.626 Latency(us) 00:26:37.626 [2024-11-20T16:20:55.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.626 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:37.626 nvme0n1 : 2.00 6582.21 822.78 0.00 0.00 2426.39 1966.08 7895.53 00:26:37.626 [2024-11-20T16:20:55.669Z] =================================================================================================================== 00:26:37.626 [2024-11-20T16:20:55.669Z] Total : 6582.21 822.78 0.00 0.00 2426.39 1966.08 7895.53 00:26:37.626 { 00:26:37.626 "results": [ 00:26:37.626 { 00:26:37.626 "job": "nvme0n1", 00:26:37.626 "core_mask": "0x2", 00:26:37.626 "workload": "randwrite", 00:26:37.626 "status": "finished", 00:26:37.626 "queue_depth": 16, 00:26:37.626 "io_size": 131072, 00:26:37.626 "runtime": 2.003128, 00:26:37.626 "iops": 6582.205430706375, 00:26:37.626 "mibps": 822.7756788382969, 00:26:37.626 "io_failed": 0, 00:26:37.626 "io_timeout": 0, 00:26:37.626 "avg_latency_us": 2426.3909016378643, 00:26:37.626 "min_latency_us": 1966.08, 00:26:37.626 "max_latency_us": 7895.527619047619 00:26:37.626 } 00:26:37.626 ], 00:26:37.626 "core_count": 1 00:26:37.626 } 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:37.626 | select(.opcode=="crc32c") 00:26:37.626 | "\(.module_name) \(.executed)"' 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2641665 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2641665 ']' 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2641665 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2641665 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2641665' 00:26:37.626 killing process with pid 2641665 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2641665 00:26:37.626 Received shutdown signal, test time was about 2.000000 seconds 
00:26:37.626 00:26:37.626 Latency(us) 00:26:37.626 [2024-11-20T16:20:55.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.626 [2024-11-20T16:20:55.669Z] =================================================================================================================== 00:26:37.626 [2024-11-20T16:20:55.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.626 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2641665 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2639991 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2639991 ']' 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2639991 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2639991 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2639991' 00:26:37.885 killing process with pid 2639991 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2639991 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2639991 00:26:37.885 00:26:37.885 
real 0m13.928s 00:26:37.885 user 0m26.602s 00:26:37.885 sys 0m4.549s 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.885 ************************************ 00:26:37.885 END TEST nvmf_digest_clean 00:26:37.885 ************************************ 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.885 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.144 ************************************ 00:26:38.144 START TEST nvmf_digest_error 00:26:38.144 ************************************ 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2642303 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2642303 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2642303 ']' 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.144 17:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.144 [2024-11-20 17:20:56.004312] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:26:38.144 [2024-11-20 17:20:56.004354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.144 [2024-11-20 17:20:56.083871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.144 [2024-11-20 17:20:56.121999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.144 [2024-11-20 17:20:56.122036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:38.144 [2024-11-20 17:20:56.122043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.144 [2024-11-20 17:20:56.122049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.144 [2024-11-20 17:20:56.122054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:38.144 [2024-11-20 17:20:56.122647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.144 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.144 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:38.144 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:38.144 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:38.144 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.404 [2024-11-20 17:20:56.199104] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.404 17:20:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.404 null0 00:26:38.404 [2024-11-20 17:20:56.293726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.404 [2024-11-20 17:20:56.317918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2642404 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2642404 /var/tmp/bperf.sock 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2642404 ']' 
00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:38.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.404 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.404 [2024-11-20 17:20:56.369073] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:26:38.404 [2024-11-20 17:20:56.369112] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2642404 ] 00:26:38.404 [2024-11-20 17:20:56.442110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.662 [2024-11-20 17:20:56.483631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.662 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.662 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:38.662 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.662 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.920 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:38.920 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.920 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.920 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.920 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.920 17:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.178 nvme0n1 00:26:39.178 17:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:39.178 17:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.178 17:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.178 17:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.178 17:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:39.178 17:20:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:39.178 Running I/O for 2 seconds... 00:26:39.178 [2024-11-20 17:20:57.129594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.129626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.129637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.178 [2024-11-20 17:20:57.140825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.140851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.140860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.178 [2024-11-20 17:20:57.149522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.149544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.149553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.178 [2024-11-20 17:20:57.162535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.162557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23145 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.162567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.178 [2024-11-20 17:20:57.171100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.171121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.171130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.178 [2024-11-20 17:20:57.182189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.182216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.182228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.178 [2024-11-20 17:20:57.194024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.194045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.194053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.178 [2024-11-20 17:20:57.203899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.203920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.203929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.178 [2024-11-20 17:20:57.212298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.178 [2024-11-20 17:20:57.212320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.178 [2024-11-20 17:20:57.212328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.223358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.223381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.223390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.233905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.233926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.233935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.242834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.242856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.242864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.251532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.251553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.251561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.260477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.260498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.260505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.271414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.271438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.271446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.280280] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.280301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.280309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.292219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.292240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.292248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.302100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.302121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.302129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.310409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.310429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.310438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.320680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.320700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.320708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.330379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.330399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.330407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.340041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.340060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.340068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.349130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.349151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.349158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.357590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.357610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.357619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.367728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.367748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.367757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.376299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.376319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 17:20:57.376327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.437 [2024-11-20 17:20:57.385741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.437 [2024-11-20 17:20:57.385761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.437 [2024-11-20 
17:20:57.385768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.396156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.396176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.396184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.405254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.405274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.405282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.415192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.415217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.415225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.423695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.423715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4392 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.423722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.432994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.433014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.433026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.445172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.445192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.445206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.456251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.456271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.456279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.467229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.467249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.467257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.438 [2024-11-20 17:20:57.475902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.438 [2024-11-20 17:20:57.475926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.438 [2024-11-20 17:20:57.475935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.485423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.485446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.485454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.494969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.494999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.495008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.505284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.505305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.505313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.515959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.515979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.515987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.529366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.529394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.529402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.541804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.541825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.541834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.550022] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.550041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.550050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.560819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.560839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.560847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.569023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.569042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.569050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.579384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.579404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.579412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.589080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.589100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.589109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.598040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.598059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.598067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.607126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.607147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.607158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.617103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.617122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.617130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.625077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.625097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.625105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.635805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.635825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.635833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.646306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.646325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.646333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.656274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.656294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 
17:20:57.656301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.667903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.667923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.667931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.677551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.677569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.677577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.686814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.686834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.686842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.698790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.698813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15947 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.698821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.706856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.706875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.706884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.718662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.718682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.718690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.697 [2024-11-20 17:20:57.731907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.697 [2024-11-20 17:20:57.731928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.697 [2024-11-20 17:20:57.731936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.744467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.744491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.744500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.754546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.754566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.754575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.764698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.764718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.764726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.772858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.772877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.772886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.782894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.782914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.782923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.790832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.790852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.790860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.802644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.802664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.802672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.815496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.815516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.815524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.826509] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.826528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.826536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.838830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.838850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.838858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.847527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.847547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.847556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.860115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.860137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.956 [2024-11-20 17:20:57.860145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:39.956 [2024-11-20 17:20:57.872312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.956 [2024-11-20 17:20:57.872333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.872341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.880691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.880711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.880722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.892788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.892809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.892816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.904469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.904489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.904498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.916755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.916775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.916783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.925414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.925434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.925442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.937170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.937191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.937199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.949122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.949142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.949151] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.959252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.959273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.959281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.967355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.967375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.967382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.978647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.978671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.978679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.957 [2024-11-20 17:20:57.987001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:39.957 [2024-11-20 17:20:57.987020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19355 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:39.957 [2024-11-20 17:20:57.987028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:57.998501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:57.998525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:57.998534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.008313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.008335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.008344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.016567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.016587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.016595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.026637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.026656] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.026664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.037055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.037075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.037083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.045209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.045229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.045237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.054987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.055007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.055015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.064655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 
17:20:58.064675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.064684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.072716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.072735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.072743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.082494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.082513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.082520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.091639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.091659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.091667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.100736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.100755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.100763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 24989.00 IOPS, 97.61 MiB/s [2024-11-20T16:20:58.259Z] [2024-11-20 17:20:58.112654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.216 [2024-11-20 17:20:58.112671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.216 [2024-11-20 17:20:58.112679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.216 [2024-11-20 17:20:58.121236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.121256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.121264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.132629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.132649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.132657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:40.217 [2024-11-20 17:20:58.141206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.141230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.141238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.151144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.151164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.151172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.160487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.160507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.160515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.171783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.171802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.171811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.180195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.180221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.180229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.191710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.191730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.191738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.203811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.203832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.203840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.212035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.212054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.212062] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.222644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.222664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.222672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.235359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.235379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.235388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.244763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.244783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.244791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.217 [2024-11-20 17:20:58.252599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.217 [2024-11-20 17:20:58.252621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:40.217 [2024-11-20 17:20:58.252629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.475 [2024-11-20 17:20:58.263909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.475 [2024-11-20 17:20:58.263931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.475 [2024-11-20 17:20:58.263941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.475 [2024-11-20 17:20:58.275169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.475 [2024-11-20 17:20:58.275190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.475 [2024-11-20 17:20:58.275198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.475 [2024-11-20 17:20:58.287766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.475 [2024-11-20 17:20:58.287787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.475 [2024-11-20 17:20:58.287795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.475 [2024-11-20 17:20:58.298534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.475 [2024-11-20 17:20:58.298553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:122 nsid:1 lba:4865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.475 [2024-11-20 17:20:58.298561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.307011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.307030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.307039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.319460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.319480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.319492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.330820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.330841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.330849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.338940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.338960] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.338969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.350780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.350800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.350808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.360118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.360137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.360147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.369120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.369140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.369148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.378322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.378342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.378350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.387995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.388015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.388023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.398837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.398858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.398866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.407322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.407345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.407354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.418957] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.418976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.418984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.429916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.429936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.429943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.442286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.442306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.442313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.453439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.453458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.453466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.462870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.462888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.462896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.471277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.471297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.471305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.480189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.480214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.480222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.489454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.489474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.489482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.498563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.498583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.498591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.476 [2024-11-20 17:20:58.507601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.476 [2024-11-20 17:20:58.507620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.476 [2024-11-20 17:20:58.507628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.518399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.518422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.518431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.527186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.527213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 
17:20:58.527222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.535898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.535919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.535928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.544928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.544951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.544960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.554562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.554582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.554590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.562739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.562759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24566 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.562767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.574207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.574233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.574241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.583761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.583783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.583791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.593277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.593298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.593306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.603250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.603273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.603281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.611820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.611840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.611849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.621470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.621490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.621498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.631070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.735 [2024-11-20 17:20:58.631091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.735 [2024-11-20 17:20:58.631099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.735 [2024-11-20 17:20:58.640916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.640936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.640944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.649699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.649719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.649727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.658332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.658352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.658360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.668967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.668987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.668995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.678389] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.678410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.678418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.687413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.687434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.687441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.697224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.697245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.697253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.706277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.706297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.706305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.715327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.715347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.715355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.724626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.724647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.724655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.736156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.736176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.736187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.743971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.743992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.744000] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.755694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.755714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.755722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.736 [2024-11-20 17:20:58.766272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.736 [2024-11-20 17:20:58.766292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.736 [2024-11-20 17:20:58.766300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.776129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.776155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.776167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.785591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.785615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 
17:20:58.785624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.794811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.794833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.794842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.803930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.803951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.803959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.813609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.813630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.813637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.822118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.822144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4537 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.822152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.830757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.830777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.830785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.840828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.840848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.840857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.850799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.850820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.850828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.859520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.859540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.859548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.870566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.870587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.870596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.880730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.880751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.880758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.888981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.889002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.889011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.899721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.899741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.899749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.911763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.911783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.911791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.921332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.921352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.921360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.930554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.930574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.930582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.942835] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.942855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.942863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.955339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.955360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.955367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.966952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.966972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.966980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 17:20:58.976072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.995 [2024-11-20 17:20:58.976091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 17:20:58.976098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 17:20:58.985682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.996 [2024-11-20 17:20:58.985701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 17:20:58.985709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 17:20:58.996805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.996 [2024-11-20 17:20:58.996829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 17:20:58.996838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 17:20:59.005718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.996 [2024-11-20 17:20:59.005737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 17:20:59.005745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 17:20:59.016567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.996 [2024-11-20 17:20:59.016587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 17:20:59.016595] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 17:20:59.025237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:40.996 [2024-11-20 17:20:59.025257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 17:20:59.025265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.254 [2024-11-20 17:20:59.038265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:41.254 [2024-11-20 17:20:59.038288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.254 [2024-11-20 17:20:59.038297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.254 [2024-11-20 17:20:59.051075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:41.254 [2024-11-20 17:20:59.051097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.254 [2024-11-20 17:20:59.051105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.254 [2024-11-20 17:20:59.062264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:41.254 [2024-11-20 17:20:59.062285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.254 [2024-11-20 
17:20:59.062293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.254 [2024-11-20 17:20:59.071404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:41.254 [2024-11-20 17:20:59.071425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.254 [2024-11-20 17:20:59.071432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.255 [2024-11-20 17:20:59.083034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:41.255 [2024-11-20 17:20:59.083055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.255 [2024-11-20 17:20:59.083063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.255 [2024-11-20 17:20:59.095145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:41.255 [2024-11-20 17:20:59.095166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.255 [2024-11-20 17:20:59.095174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.255 [2024-11-20 17:20:59.104551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:41.255 [2024-11-20 17:20:59.104571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2888 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.255 [2024-11-20 17:20:59.104579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.255 [2024-11-20 17:20:59.113117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135a740) 00:26:41.255 [2024-11-20 17:20:59.113137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.255 [2024-11-20 17:20:59.113145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.255 25254.50 IOPS, 98.65 MiB/s 00:26:41.255 Latency(us) 00:26:41.255 [2024-11-20T16:20:59.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.255 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:41.255 nvme0n1 : 2.00 25268.66 98.71 0.00 0.00 5060.40 2371.78 19972.88 00:26:41.255 [2024-11-20T16:20:59.298Z] =================================================================================================================== 00:26:41.255 [2024-11-20T16:20:59.298Z] Total : 25268.66 98.71 0.00 0.00 5060.40 2371.78 19972.88 00:26:41.255 { 00:26:41.255 "results": [ 00:26:41.255 { 00:26:41.255 "job": "nvme0n1", 00:26:41.255 "core_mask": "0x2", 00:26:41.255 "workload": "randread", 00:26:41.255 "status": "finished", 00:26:41.255 "queue_depth": 128, 00:26:41.255 "io_size": 4096, 00:26:41.255 "runtime": 2.003945, 00:26:41.255 "iops": 25268.65757293738, 00:26:41.255 "mibps": 98.70569364428664, 00:26:41.255 "io_failed": 0, 00:26:41.255 "io_timeout": 0, 00:26:41.255 "avg_latency_us": 5060.401988006135, 00:26:41.255 "min_latency_us": 2371.7790476190476, 00:26:41.255 "max_latency_us": 19972.876190476192 00:26:41.255 } 00:26:41.255 ], 00:26:41.255 "core_count": 1 00:26:41.255 } 
00:26:41.255 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:41.255 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:41.255 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:41.255 | .driver_specific 00:26:41.255 | .nvme_error 00:26:41.255 | .status_code 00:26:41.255 | .command_transient_transport_error' 00:26:41.255 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 )) 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2642404 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2642404 ']' 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2642404 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642404 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2642404' 00:26:41.513 killing process with pid 2642404 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2642404 00:26:41.513 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.513 00:26:41.513 Latency(us) 00:26:41.513 [2024-11-20T16:20:59.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.513 [2024-11-20T16:20:59.556Z] =================================================================================================================== 00:26:41.513 [2024-11-20T16:20:59.556Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2642404 00:26:41.513 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2642876 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2642876 /var/tmp/bperf.sock 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2642876 ']' 00:26:41.771 17:20:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:41.771 [2024-11-20 17:20:59.597988] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:26:41.771 [2024-11-20 17:20:59.598038] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2642876 ] 00:26:41.771 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:41.771 Zero copy mechanism will not be used. 
00:26:41.771 [2024-11-20 17:20:59.669320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.771 [2024-11-20 17:20:59.706474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:41.771 17:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.030 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:42.030 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.030 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.030 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.030 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.030 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.598 nvme0n1 00:26:42.598 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:42.598 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.598 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.598 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.598 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:42.598 17:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:42.598 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:42.598 Zero copy mechanism will not be used. 00:26:42.598 Running I/O for 2 seconds... 00:26:42.598 [2024-11-20 17:21:00.547875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:42.598 [2024-11-20 17:21:00.547911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.598 [2024-11-20 17:21:00.547922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.598 [2024-11-20 17:21:00.553734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:42.598 [2024-11-20 17:21:00.553761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.598 [2024-11-20 17:21:00.553770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:42.598 
[2024-11-20 17:21:00.560523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:42.598 [2024-11-20 17:21:00.560547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.598 [2024-11-20 17:21:00.560556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.598 [2024-11-20 17:21:00.567716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:42.598 [2024-11-20 17:21:00.567740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.598 [2024-11-20 17:21:00.567757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.598 [2024-11-20 17:21:00.574711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:42.598 [2024-11-20 17:21:00.574737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.598 [2024-11-20 17:21:00.574754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:42.598 [2024-11-20 17:21:00.582141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:42.598 [2024-11-20 17:21:00.582165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.598 [2024-11-20 17:21:00.582174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.589455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.589478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.589486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.596953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.596975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.596983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.604788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.604812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.604821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.608985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.609009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.609019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.613127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.613150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.613159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.618434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.618457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.618465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.623415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.623436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.623443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.628033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.628060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.628068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.598 [2024-11-20 17:21:00.631062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.598 [2024-11-20 17:21:00.631082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.598 [2024-11-20 17:21:00.631090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.599 [2024-11-20 17:21:00.636297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.599 [2024-11-20 17:21:00.636321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.599 [2024-11-20 17:21:00.636330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.858 [2024-11-20 17:21:00.641622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.858 [2024-11-20 17:21:00.641646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.858 [2024-11-20 17:21:00.641656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.858 [2024-11-20 17:21:00.647740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.858 [2024-11-20 17:21:00.647763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.858 [2024-11-20 17:21:00.647771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.858 [2024-11-20 17:21:00.653274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.858 [2024-11-20 17:21:00.653296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.858 [2024-11-20 17:21:00.653304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.858 [2024-11-20 17:21:00.658717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.858 [2024-11-20 17:21:00.658738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.858 [2024-11-20 17:21:00.658746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.858 [2024-11-20 17:21:00.664148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.858 [2024-11-20 17:21:00.664170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.858 [2024-11-20 17:21:00.664178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.858 [2024-11-20 17:21:00.669472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.858 [2024-11-20 17:21:00.669494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.858 [2024-11-20 17:21:00.669502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.858 [2024-11-20 17:21:00.674992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.858 [2024-11-20 17:21:00.675013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.858 [2024-11-20 17:21:00.675020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.680371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.680393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.680401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.685415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.685436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.685444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.690676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.690698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.690706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.695960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.695981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.695990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.701275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.701296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.701304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.706582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.706603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.706611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.711831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.711853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.711861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.717034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.717060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.717068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.722238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.722258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.722265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.727448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.727469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.727477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.732673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.732693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.732701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.737918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.737939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.737946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.743410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.743431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.743440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.748668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.748689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.748697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.753934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.753954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.753962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.759164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.759185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.759193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.764378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.764399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.764408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.769589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.769610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.769617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.774873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.774898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.774909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.780147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.780170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.780178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.785342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.785363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.785370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.790547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.790567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.790575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.795794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.795815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.795823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.800894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.800923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.859 [2024-11-20 17:21:00.800931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.859 [2024-11-20 17:21:00.803768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.859 [2024-11-20 17:21:00.803788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.803802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.809004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.809025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.809034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.814281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.814301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.814309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.819545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.819564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.819572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.824876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.824896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.824903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.830132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.830152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.830160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.835305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.835325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.835332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.840475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.840495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.840503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.845716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.845736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.845744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.850956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.850980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.850988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.856684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.856703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.856711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.861288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.861316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.861327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.866457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.866476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.866485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.871604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.871623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.871631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.876748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.876767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.876775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.882581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.882601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.882609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.887078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.887098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.887106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:42.860 [2024-11-20 17:21:00.892251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:42.860 [2024-11-20 17:21:00.892271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:42.860 [2024-11-20 17:21:00.892278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:43.120 [2024-11-20 17:21:00.898223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.120 [2024-11-20 17:21:00.898246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.120 [2024-11-20 17:21:00.898254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:43.120 [2024-11-20 17:21:00.903551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.120 [2024-11-20 17:21:00.903572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.120 [2024-11-20 17:21:00.903581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:43.120 [2024-11-20 17:21:00.908083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.120 [2024-11-20 17:21:00.908104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.120 [2024-11-20 17:21:00.908112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:43.120 [2024-11-20 17:21:00.913366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.120 [2024-11-20 17:21:00.913387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.120 [2024-11-20 17:21:00.913395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:43.120 [2024-11-20 17:21:00.917960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.120 [2024-11-20 17:21:00.917982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.120 [2024-11-20 17:21:00.917989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:43.120 [2024-11-20 17:21:00.923176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.120 [2024-11-20 17:21:00.923197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.923213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.928418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.928440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.928448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.933764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.933786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.933794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.939056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.939076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.939089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.944284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.944305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.944313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.949498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.949518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.949527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.954378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.954400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.954408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.959656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.959677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.959685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.964902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.964923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.964931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.970178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.970200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.970216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.975467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.975488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.975496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.980740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.980762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.980769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.985645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.985670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.121 [2024-11-20 17:21:00.985678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:43.121 [2024-11-20 17:21:00.990915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.121 [2024-11-20 17:21:00.990937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:00.990945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:00.995988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:00.996009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:00.996018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.001043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.001065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.001073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.006029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.006050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.006058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.011061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.011082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.011090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.015754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.015775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.015783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.020662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.020682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.020691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.025716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.025737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.025744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.030906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 
00:26:43.121 [2024-11-20 17:21:01.030925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.030933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.036103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.036124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.036132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.041265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.041286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.041294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.046467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.046498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.046506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.051662] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.051683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.051691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.056880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.056901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.121 [2024-11-20 17:21:01.056909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.121 [2024-11-20 17:21:01.062105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.121 [2024-11-20 17:21:01.062126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.062135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.067298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.067319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.067327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.072538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.072558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.072571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.077781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.077803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.077812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.083007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.083028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.083036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.088232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.088252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.088260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.093420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.093441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.093450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.098571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.098592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.098600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.103730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.103751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.103760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.108889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.108910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.108918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.114051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.114072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.114080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.119173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.119194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.119207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.124429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.124451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.124459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.129599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.129619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:43.122 [2024-11-20 17:21:01.129627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.134825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.134846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.134854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.140046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.140066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.140074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.145187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.145222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.145230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.150399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.150421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.150429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.122 [2024-11-20 17:21:01.155626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.122 [2024-11-20 17:21:01.155650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.122 [2024-11-20 17:21:01.155659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.160870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.160894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.160908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.166115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.166138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.166147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.171298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.171320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.171328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.176481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.176504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.176512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.181650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.181671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.181679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.186799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.186820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.186828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.191963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 
00:26:43.380 [2024-11-20 17:21:01.191983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.191991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.197164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.197185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.197193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.202283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.202304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.202311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.207412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.207437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.207445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.212626] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.212647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.212655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.217799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.217821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.217829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.223025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.223046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.223054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.228213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.228234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.228242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.233363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.233385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.233394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.238550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.238571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.238579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.242001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.242022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.242030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.246841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.246862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.246870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.252150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.252171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.252180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.257009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.257031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.257040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.262904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.262927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.262935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.380 [2024-11-20 17:21:01.268057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.380 [2024-11-20 17:21:01.268078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.380 [2024-11-20 17:21:01.268087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:43.380 [2024-11-20 17:21:01.273173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0)
00:26:43.380 [2024-11-20 17:21:01.273194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.380 [2024-11-20 17:21:01.273208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... identical data-digest-error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets repeat on tqpair=(0x9ec4e0) for the remaining in-flight READ commands (qid:1, varying cid and lba, len:32) from 17:21:01.278 through 17:21:01.729 ...]
00:26:43.641 5734.00 IOPS, 716.75 MiB/s [2024-11-20T16:21:01.684Z]
00:26:43.903 [2024-11-20 17:21:01.729266]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.729288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.729296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.734700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.734726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.734736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.740129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.740150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.740158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.745423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.745445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.745453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.750614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.750636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.750644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.755684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.755706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.755714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.760963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.760984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.760992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.766224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.766246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.766254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.771246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.771268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.771277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.776171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.776193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.776207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.781160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.781182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.781190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.786039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.786060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.786068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.790973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.790993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.791001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.796034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.796055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.796063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.801236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.801256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.801264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.806574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.806596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:43.903 [2024-11-20 17:21:01.806604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.812089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.812111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.812119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.817667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.817689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.817698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.823618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.823640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.823652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.829059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.829081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.829089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.834452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.834475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.834483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.839850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.839871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.839879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.845098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.845119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.845127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.903 [2024-11-20 17:21:01.850239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.903 [2024-11-20 17:21:01.850260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.903 [2024-11-20 17:21:01.850268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.855379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.855400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.855408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.860380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.860400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.860407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.865513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.865534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.865541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.870719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 
00:26:43.904 [2024-11-20 17:21:01.870743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.870750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.875922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.875943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.875950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.881146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.881167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.881174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.886259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.886281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.886289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.891536] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.891556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.891564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.897003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.897023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.897031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.902594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.902615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.902623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.908053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.908074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.908081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.913440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.913471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.913478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.918863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.918883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.918891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.924712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.924733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.924741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.930188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.930215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.930223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.935687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.935707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.935714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.904 [2024-11-20 17:21:01.941139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:43.904 [2024-11-20 17:21:01.941167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.904 [2024-11-20 17:21:01.941179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.946661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.946684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.946692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.952028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.952050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.952058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.957421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.957443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.957452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.962914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.962936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.962948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.968288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.968309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.968317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.973730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.973750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.973758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.979060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.979082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.979089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.984613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.984634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.984643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.990060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.990082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.990090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:01.995607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:01.995628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:01.995636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:02.001190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.163 [2024-11-20 17:21:02.001218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.163 [2024-11-20 17:21:02.001227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.163 [2024-11-20 17:21:02.006654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.006675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.006683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.011995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.012015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.012023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.017341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.017361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.017369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.022714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.022735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.022743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.029087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.029108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.029116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.036481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.036503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.036511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.044069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 
00:26:44.164 [2024-11-20 17:21:02.044091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.044099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.052651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.052673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.052681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.060406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.060428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.060437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.067855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.067876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.067889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.075366] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.075387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.075395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.082910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.082930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.082939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.091137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.091158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.091166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.099170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.099191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.099199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.106551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.106573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.106581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.114126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.114147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.114156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.122536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.122558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.122566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.130140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.130162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.130170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.138232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.138258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.138266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.145661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.145684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.145692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.152041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.152063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.152072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.158585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.158606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.158614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.165808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.165829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.165838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.173055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.173075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.173090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.180598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.180620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.180629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.188722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.188744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:44.164 [2024-11-20 17:21:02.188752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.196254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.196292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.164 [2024-11-20 17:21:02.196301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.164 [2024-11-20 17:21:02.202320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.164 [2024-11-20 17:21:02.202344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.165 [2024-11-20 17:21:02.202353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.208082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.208106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.208115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.213775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.213797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.213806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.217430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.217450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.217458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.222059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.222079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.222087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.227489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.227510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.227518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.232799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.232820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.232828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.237976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.237996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.238004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.243124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.243145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.243156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.248414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.424 [2024-11-20 17:21:02.248445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.248453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.254153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 
00:26:44.424 [2024-11-20 17:21:02.254174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.424 [2024-11-20 17:21:02.254182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.424 [2024-11-20 17:21:02.259434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.259454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.259462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.264923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.264943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.264951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.270703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.270725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.270732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.276627] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.276648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.276657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.282245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.282266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.282274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.287580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.287601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.287609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.293937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.293964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.293972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.301180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.301208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.301217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.308174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.308196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.308210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.316130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.316152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.316161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.323885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.323908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.323916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.330652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.330673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.330682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.336343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.336364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.336372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.341667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.341687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.341694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.346789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.346810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.346819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.351801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.351821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.351829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.357032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.357052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.357060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.362275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.362295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.362303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.367796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.367817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:44.425 [2024-11-20 17:21:02.367825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.373521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.373541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.373549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.379473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.379494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.379502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.384779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.384800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.384807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.389992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.390012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.390021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.395195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.395222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.395234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.400431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.400452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.400460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.405649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.405669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.405677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.410506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.425 [2024-11-20 17:21:02.410527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.425 [2024-11-20 17:21:02.410535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.425 [2024-11-20 17:21:02.415730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.415751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.415759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.420989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.421009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.421017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.426274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.426294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.426301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.431465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 
00:26:44.426 [2024-11-20 17:21:02.431485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.431493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.436684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.436703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.436711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.441894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.441918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.441926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.447124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.447144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.447152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.452329] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.452350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.452358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.457485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.457505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.457513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.426 [2024-11-20 17:21:02.462815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.426 [2024-11-20 17:21:02.462837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.426 [2024-11-20 17:21:02.462846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.468144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.468167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.468175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.473433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.473454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.473463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.478530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.478550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.478559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.483728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.483749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.483757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.488909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.488930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.488938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.494017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.494038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.494045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.499187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.499214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.499223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.504386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.504408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.504416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.509568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.509589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.509596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.514812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.514833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.514840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.520017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.520038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.520046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.525220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.525241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.525249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.530431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.530452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:44.685 [2024-11-20 17:21:02.530464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.535620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.535641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.535648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.540683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.540703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.540710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.685 [2024-11-20 17:21:02.545835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9ec4e0) 00:26:44.685 [2024-11-20 17:21:02.545856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.685 [2024-11-20 17:21:02.545864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.685 5603.00 IOPS, 700.38 MiB/s 00:26:44.685 Latency(us) 00:26:44.685 [2024-11-20T16:21:02.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.685 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 
00:26:44.685 nvme0n1 : 2.00 5606.11 700.76 0.00 0.00 2851.49 620.25 8550.89 00:26:44.685 [2024-11-20T16:21:02.728Z] =================================================================================================================== 00:26:44.685 [2024-11-20T16:21:02.728Z] Total : 5606.11 700.76 0.00 0.00 2851.49 620.25 8550.89 00:26:44.685 { 00:26:44.685 "results": [ 00:26:44.685 { 00:26:44.685 "job": "nvme0n1", 00:26:44.685 "core_mask": "0x2", 00:26:44.685 "workload": "randread", 00:26:44.685 "status": "finished", 00:26:44.685 "queue_depth": 16, 00:26:44.685 "io_size": 131072, 00:26:44.685 "runtime": 2.001745, 00:26:44.685 "iops": 5606.108670185264, 00:26:44.685 "mibps": 700.763583773158, 00:26:44.685 "io_failed": 0, 00:26:44.685 "io_timeout": 0, 00:26:44.685 "avg_latency_us": 2851.4920793339616, 00:26:44.685 "min_latency_us": 620.2514285714286, 00:26:44.685 "max_latency_us": 8550.887619047619 00:26:44.685 } 00:26:44.685 ], 00:26:44.685 "core_count": 1 00:26:44.685 } 00:26:44.685 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:44.685 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:44.685 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:44.685 | .driver_specific 00:26:44.685 | .nvme_error 00:26:44.685 | .status_code 00:26:44.685 | .command_transient_transport_error' 00:26:44.685 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 362 > 0 )) 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2642876 00:26:44.944 17:21:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2642876 ']' 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2642876 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642876 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2642876' 00:26:44.944 killing process with pid 2642876 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2642876 00:26:44.944 Received shutdown signal, test time was about 2.000000 seconds 00:26:44.944 00:26:44.944 Latency(us) 00:26:44.944 [2024-11-20T16:21:02.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.944 [2024-11-20T16:21:02.987Z] =================================================================================================================== 00:26:44.944 [2024-11-20T16:21:02.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:44.944 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2642876 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2643562 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2643562 /var/tmp/bperf.sock 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2643562 ']' 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.203 17:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.203 [2024-11-20 17:21:03.041195] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:26:45.203 [2024-11-20 17:21:03.041263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2643562 ] 00:26:45.203 [2024-11-20 17:21:03.115653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.203 [2024-11-20 17:21:03.158667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.462 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.720 nvme0n1 00:26:45.720 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:45.720 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.720 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.720 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.720 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:45.720 17:21:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:45.978 Running I/O for 2 seconds... 
00:26:45.978 [2024-11-20 17:21:03.855520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ede470 00:26:45.979 [2024-11-20 17:21:03.856882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.856910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.862901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016edece0 00:26:45.979 [2024-11-20 17:21:03.863783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.863803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.872593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee23b8 00:26:45.979 [2024-11-20 17:21:03.873567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.873586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.882803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016efac10 00:26:45.979 [2024-11-20 17:21:03.883958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.883978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.893098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee2c28 00:26:45.979 [2024-11-20 17:21:03.894649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.894670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.899532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016efac10 00:26:45.979 [2024-11-20 17:21:03.900302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.900321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.909412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee8d30 00:26:45.979 [2024-11-20 17:21:03.910665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.910684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.920286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee1b48 00:26:45.979 [2024-11-20 17:21:03.921871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.921889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.927590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee3060 00:26:45.979 [2024-11-20 17:21:03.928689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.928707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.936543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee6738 00:26:45.979 [2024-11-20 17:21:03.937823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.937841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.944331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016eed0b0 00:26:45.979 [2024-11-20 17:21:03.945040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.945058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.953815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee49b0 00:26:45.979 [2024-11-20 17:21:03.954622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.954640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.963033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016eed4e8 00:26:45.979 [2024-11-20 17:21:03.963868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.963887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.971916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ef1868 00:26:45.979 [2024-11-20 17:21:03.972757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.972775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.981993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016efcdd0 00:26:45.979 [2024-11-20 17:21:03.982982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:03.983001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:03.991317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ef7100 00:26:45.979 [2024-11-20 17:21:03.992058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19846 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:45.979 [2024-11-20 17:21:03.992077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:04.000777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee84c0 00:26:45.979 [2024-11-20 17:21:04.001847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:04.001867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.979 [2024-11-20 17:21:04.010193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee73e0 00:26:45.979 [2024-11-20 17:21:04.011263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.979 [2024-11-20 17:21:04.011282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:46.238 [2024-11-20 17:21:04.019470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016edf550 00:26:46.238 [2024-11-20 17:21:04.020553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.238 [2024-11-20 17:21:04.020575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:46.238 [2024-11-20 17:21:04.028076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016eea248 00:26:46.238 [2024-11-20 17:21:04.029122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:15243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.029143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.037735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016efd208
00:26:46.238 [2024-11-20 17:21:04.038911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.038930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.046298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ef6cc8
00:26:46.238 [2024-11-20 17:21:04.047107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.047126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.055365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ef5be8
00:26:46.238 [2024-11-20 17:21:04.056184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.056207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.064723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee99d8
00:26:46.238 [2024-11-20 17:21:04.065313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.065332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.075125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016eeff18
00:26:46.238 [2024-11-20 17:21:04.076545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.076563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.083572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee95a0
00:26:46.238 [2024-11-20 17:21:04.084668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.084685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.092564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee7818
00:26:46.238 [2024-11-20 17:21:04.093608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.093626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.100884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee1710
00:26:46.238 [2024-11-20 17:21:04.102171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.102189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:46.238 [2024-11-20 17:21:04.109283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee6738
00:26:46.238 [2024-11-20 17:21:04.110012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.238 [2024-11-20 17:21:04.110030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.117924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016edfdc0
00:26:46.239 [2024-11-20 17:21:04.118616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.118635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.127791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ef9f68
00:26:46.239 [2024-11-20 17:21:04.128614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.128640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.139200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee84c0
00:26:46.239 [2024-11-20 17:21:04.140544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.140563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.147618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ede8a8
00:26:46.239 [2024-11-20 17:21:04.148874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.148893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.157656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016efa3a0
00:26:46.239 [2024-11-20 17:21:04.158817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.158835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.165269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ef0bc0
00:26:46.239 [2024-11-20 17:21:04.166049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.166066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.174924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.175070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.175091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.184351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.184495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.184514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.193763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.193907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.193925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.203185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.203337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.203354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.212632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.212784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.212801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.222049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.222193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.222215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.231466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.231612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.231629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.240893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.241036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.241053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.250357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.250504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.250522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.259784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.259924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.259942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.239 [2024-11-20 17:21:04.269232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.239 [2024-11-20 17:21:04.269381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.239 [2024-11-20 17:21:04.269399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.278835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.278984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.279006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.288408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.288554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.288574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.297888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.298039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.298060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.307851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.308005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.308025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.317443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.317587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.317604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.326941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.327085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.327102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.336426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.336572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.336589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.345833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.345974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.345992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.355271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.355414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.355432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.364698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.364840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.364874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.374413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.374564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.374590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.383987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.384148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.384166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.393531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.393677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.393693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.402993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.403136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.403152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.412453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.412597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.412614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.421934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.498 [2024-11-20 17:21:04.422095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.498 [2024-11-20 17:21:04.422111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.498 [2024-11-20 17:21:04.431413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.431560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.431576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.440864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.441009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.441030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.450299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.450447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.450465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.459725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.459872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.459889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.469176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.469344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.469362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.478676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.478819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.478836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.488103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.488254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.488271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.497507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.497650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.497667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.506924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.507067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.507084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.516319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.516461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.516485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.525792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.525934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.525958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.499 [2024-11-20 17:21:04.535239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.499 [2024-11-20 17:21:04.535387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.499 [2024-11-20 17:21:04.535407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.544875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.545019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.545040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.554318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.554463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.554483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.563748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.563893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.563912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.573145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.573314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.573332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.582662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.582807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.582824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.592043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.592187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.592210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.601448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.601591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.601608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.610913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.611055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.611073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.620447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.620596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.620625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.630082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.630251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.630269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.639735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.639895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.639911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.649246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.649391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.649410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.658651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.658795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.658812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.668040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.668182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.668198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.677448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.677609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.677627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.686895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.687037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.687054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.696307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.758 [2024-11-20 17:21:04.696451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.758 [2024-11-20 17:21:04.696470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.758 [2024-11-20 17:21:04.705731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.705875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.705893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.715216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.715361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.715378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.724621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.724763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.724780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.734041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.734181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.734197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.743456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.743599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.743621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.752858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.753000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.753017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.762282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.762426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.762443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.771691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.771831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.771854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.781103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.781266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.781283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:46.759 [2024-11-20 17:21:04.790566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:46.759 [2024-11-20 17:21:04.790709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:46.759 [2024-11-20 17:21:04.790726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:47.018 [2024-11-20 17:21:04.800249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.800401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.800423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.809768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.809910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.809930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.819176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.819326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.819346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.828586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.828729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.828747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.837996] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.838143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.838160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.847406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.848417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.848437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 27244.00 IOPS, 106.42 MiB/s [2024-11-20T16:21:05.061Z] [2024-11-20 17:21:04.856816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.856958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.856976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.866238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.866387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.866407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.875653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.875794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.875828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.885340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.885484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.885501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.894907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.895054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.895072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.904496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.904657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.904674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.914056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.914198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.914222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.923593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.923736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.923758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.933007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.933150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.933167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.942398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.942541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 
[2024-11-20 17:21:04.942558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.951822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.951964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.951981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.961215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.961360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.961376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.970658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.018 [2024-11-20 17:21:04.970800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.018 [2024-11-20 17:21:04.970817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.018 [2024-11-20 17:21:04.980059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:04.980207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7764 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:04.980225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.019 [2024-11-20 17:21:04.989496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:04.989638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:04.989655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.019 [2024-11-20 17:21:04.998891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:04.999035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:04.999053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.019 [2024-11-20 17:21:05.008325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:05.008469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:05.008489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.019 [2024-11-20 17:21:05.017838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:05.017981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:22186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:05.017998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.019 [2024-11-20 17:21:05.027339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:05.027484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:05.027507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.019 [2024-11-20 17:21:05.036765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:05.036911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:05.036928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.019 [2024-11-20 17:21:05.046192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:05.046346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:05.046363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.019 [2024-11-20 17:21:05.055700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.019 [2024-11-20 17:21:05.055847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.019 [2024-11-20 17:21:05.055868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.277 [2024-11-20 17:21:05.065369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.277 [2024-11-20 17:21:05.065515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.277 [2024-11-20 17:21:05.065537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.277 [2024-11-20 17:21:05.074819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.277 [2024-11-20 17:21:05.074961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.277 [2024-11-20 17:21:05.074979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.277 [2024-11-20 17:21:05.084267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.277 [2024-11-20 17:21:05.084416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.277 [2024-11-20 17:21:05.084433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.277 [2024-11-20 17:21:05.093699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.277 
[2024-11-20 17:21:05.093849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.277 [2024-11-20 17:21:05.093867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.277 [2024-11-20 17:21:05.103139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.103292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.103309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.112567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.112709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.112730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.122147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.122320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.122338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.131657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.131816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.131833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.141348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.141495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.141514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.150935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.151097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.151115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.160557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.160721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.160738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.170098] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.170247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.170265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.179504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.179646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.179665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.188938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.189079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.189096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.198350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.198513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.198531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:26:47.278 [2024-11-20 17:21:05.207817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.207966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.207983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.217246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.217393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.217409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.226698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.226843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.226862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.236145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.236295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.236314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.245566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.245710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.245727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.254961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.255103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.255121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.264385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.264532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.264549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:47.278 [2024-11-20 17:21:05.273797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630 00:26:47.278 [2024-11-20 17:21:05.273939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.278 [2024-11-20 17:21:05.273956] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:47.278 [2024-11-20 17:21:05.283219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:47.278 [2024-11-20 17:21:05.283366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:47.278 [2024-11-20 17:21:05.283383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[... the same Data-digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for each injected I/O (cid 8/86/125, varying lba), 17:21:05.292702 through 17:21:05.843235; repeats elided ...]
00:26:48.057 27101.00 IOPS, 105.86 MiB/s [2024-11-20T16:21:06.100Z]
00:26:48.057 [2024-11-20 17:21:05.852563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c180) with pdu=0x200016ee0630
00:26:48.057 [2024-11-20 17:21:05.852704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:48.057 [2024-11-20 17:21:05.852725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:48.057
00:26:48.057 Latency(us)
00:26:48.057 [2024-11-20T16:21:06.100Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:48.057 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:48.057 nvme0n1 : 2.00                              27098.86     105.85      0.00     0.00    4715.58    2044.10   10860.25
00:26:48.057 [2024-11-20T16:21:06.100Z] ===================================================================================================================
00:26:48.057 [2024-11-20T16:21:06.100Z] Total :                             27098.86     105.85      0.00     0.00    4715.58    2044.10   10860.25
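The MiB/s column in the result table above follows directly from the IOPS column and the 4096-byte I/O size (MiB/s = IOPS × io_size / 2^20); a quick sanity check of that arithmetic, using the figures from the table:

```shell
# Recompute throughput from the reported IOPS and the 4096-byte randwrite I/O size.
# 27098.86 IOPS * 4096 B = ~111.0 MB/s raw; divide by 2^20 for MiB/s.
awk 'BEGIN { printf "%.2f\n", 27098.86 * 4096 / (1024 * 1024) }'
```

This prints 105.85, matching the MiB/s value reported by bdevperf for the job and the Total row.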
00:26:48.057 {
00:26:48.057   "results": [
00:26:48.057     {
00:26:48.057       "job": "nvme0n1",
00:26:48.057       "core_mask": "0x2",
00:26:48.057       "workload": "randwrite",
00:26:48.057       "status": "finished",
00:26:48.057       "queue_depth": 128,
00:26:48.057       "io_size": 4096,
00:26:48.057       "runtime": 2.004586,
00:26:48.057       "iops": 27098.86230872609,
00:26:48.058       "mibps": 105.85493089346129,
00:26:48.058       "io_failed": 0,
00:26:48.058       "io_timeout": 0,
00:26:48.058       "avg_latency_us": 4715.584585601554,
00:26:48.058       "min_latency_us": 2044.0990476190477,
00:26:48.058       "max_latency_us": 10860.251428571428
00:26:48.058     }
00:26:48.058   ],
00:26:48.058   "core_count": 1
00:26:48.058 }
00:26:48.058 17:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:48.058 17:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:48.058 17:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:26:48.058 17:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:48.058 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 ))
00:26:48.058 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2643562
00:26:48.058 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2643562 ']'
00:26:48.058 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2643562
00:26:48.058 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:48.058
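The trace above derives the error count (213) by pulling the per-controller transient-transport-error counter out of the `bdev_get_iostat` JSON. A standalone sketch of that jq extraction, run here against a hypothetical sample document with the same field path (in the real test the JSON comes from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`):

```shell
# Hypothetical bdev_get_iostat-style document; only the field path matters here.
sample='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":213}}}}]}'

# Same filter host/digest.sh uses to count digest-induced transient errors.
errcount=$(echo "$sample" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

echo "$errcount"   # prints the counter value from the sample, i.e. 213
[ "$errcount" -gt 0 ] && echo "digest errors were surfaced as transient transport errors"
```

A non-zero count is exactly what the `(( 213 > 0 ))` assertion in the trace checks: every injected data-digest failure should have been reported as a COMMAND TRANSIENT TRANSPORT ERROR rather than a data corruption.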
17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:48.058 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2643562
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2643562'
00:26:48.316 killing process with pid 2643562
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2643562
00:26:48.316 Received shutdown signal, test time was about 2.000000 seconds
00:26:48.316
00:26:48.316 Latency(us)
00:26:48.316 [2024-11-20T16:21:06.359Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:48.316 [2024-11-20T16:21:06.359Z] ===================================================================================================================
00:26:48.316 [2024-11-20T16:21:06.359Z] Total :                                 0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2643562
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2644440
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2644440 /var/tmp/bperf.sock
00:26:48.316 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:48.317 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2644440 ']'
00:26:48.317 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:48.317 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:48.317 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:48.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:48.317 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:48.317 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:48.317 [2024-11-20 17:21:06.328407] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization...
00:26:48.317 [2024-11-20 17:21:06.328454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2644440 ]
00:26:48.317 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:48.317 Zero copy mechanism will not be used.
00:26:48.575 [2024-11-20 17:21:06.404005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.575 [2024-11-20 17:21:06.446099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.575 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.575 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:48.576 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.576 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.833 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:48.833 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.833 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.833 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.833 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.833 17:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.091 nvme0n1 00:26:49.091 17:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:49.091 17:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.091 17:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:49.091 17:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.091 17:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:49.091 17:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:49.091 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.091 Zero copy mechanism will not be used. 00:26:49.091 Running I/O for 2 seconds... 00:26:49.091 [2024-11-20 17:21:07.122678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.091 [2024-11-20 17:21:07.122782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.091 [2024-11-20 17:21:07.122812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.091 [2024-11-20 17:21:07.127543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.091 [2024-11-20 17:21:07.127638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.091 [2024-11-20 17:21:07.127666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.351 
[2024-11-20 17:21:07.133324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.351 [2024-11-20 17:21:07.133492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.351 [2024-11-20 17:21:07.133516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.351 [2024-11-20 17:21:07.139518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.351 [2024-11-20 17:21:07.139672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.351 [2024-11-20 17:21:07.139693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.351 [2024-11-20 17:21:07.146037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.351 [2024-11-20 17:21:07.146199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.351 [2024-11-20 17:21:07.146225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.351 [2024-11-20 17:21:07.152429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.351 [2024-11-20 17:21:07.152608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.351 [2024-11-20 17:21:07.152627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.351 [2024-11-20 17:21:07.158933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.351 [2024-11-20 17:21:07.159099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.351 [2024-11-20 17:21:07.159120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.351 [2024-11-20 17:21:07.165144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.351 [2024-11-20 17:21:07.165290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.351 [2024-11-20 17:21:07.165309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.171542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.171697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.171716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.177780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.177932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.177951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.184057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.184212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.184230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.190348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.190506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.190525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.197775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.197935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.197955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.204935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.205007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.205026] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.212219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.212343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.212362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.219607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.219774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.219793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.227544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.227676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.227695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.234464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.234604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.234623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.239473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.239543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.239562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.244188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.244262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.244280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.248786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.248847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.248865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.253361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.253415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.253433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.257862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.257961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.257978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.262395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.262463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.262481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.266959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.267030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.267052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.271512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.271573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.271591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.276010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.276070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.276088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.280539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.280599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.280618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.285027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.285099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.285116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.289545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 
00:26:49.352 [2024-11-20 17:21:07.289616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.289634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.294087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.294160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.294178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.298664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.298731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.298750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.303304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.352 [2024-11-20 17:21:07.303359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.352 [2024-11-20 17:21:07.303378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.352 [2024-11-20 17:21:07.307838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.307924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.307942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.312412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.312489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.312509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.317007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.317082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.317101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.321543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.321625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.321643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.326139] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.326221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.326239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.330653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.330718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.330736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.335169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.335232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.335250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.339632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.339689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.339707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:49.353 [2024-11-20 17:21:07.344097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.344151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.344169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.348611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.348670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.348688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.353088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.353148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.353166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.357580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.357650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.357669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.362038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.362108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.362126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.366591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.366655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.366673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.371085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.371149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.371168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.375604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.375660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.375680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.380110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.380183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.380208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.384652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.384725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.384752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.353 [2024-11-20 17:21:07.389153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.353 [2024-11-20 17:21:07.389217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.353 [2024-11-20 17:21:07.389239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.611 [2024-11-20 17:21:07.393645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.393711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:49.612 [2024-11-20 17:21:07.393733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.398242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.398295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.398316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.402766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.402829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.402849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.407860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.407982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.408002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.413950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.414124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.414144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.420238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.420396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.420415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.426555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.426708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.426726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.432884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.433025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.433044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.439532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.439695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.439713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.446350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.446520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.446539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.452885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.453053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.453072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.459289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.459449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.459468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.466012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 
00:26:49.612 [2024-11-20 17:21:07.466173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.466192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.472277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.472438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.472456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.478506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.478671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.478689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.484751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.484917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.484936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.491180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.491335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.491354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.497720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.497873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.497892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.503993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.504096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.504114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.509876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.510049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.510067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.516149] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.516324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.516342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.522510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.522670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.522688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.528669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.528866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.528886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.535247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.535407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.535426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:49.612 [2024-11-20 17:21:07.541995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.542170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.542192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.548677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.612 [2024-11-20 17:21:07.548867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.612 [2024-11-20 17:21:07.548894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.612 [2024-11-20 17:21:07.555324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.555486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.555504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.561680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.561859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.561877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.568245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.568391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.568410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.574662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.574822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.574845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.581282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.581437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.581467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.587784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.587960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.587979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.594573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.594746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.594765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.601344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.601509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.601528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.607864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.608036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.608055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.614878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.615035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:49.613 [2024-11-20 17:21:07.615054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.621696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.621851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.621869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.629447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.629595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.629630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.635718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.635784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.635802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.642187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.642315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.642333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.613 [2024-11-20 17:21:07.648687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.613 [2024-11-20 17:21:07.648760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.613 [2024-11-20 17:21:07.648781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.872 [2024-11-20 17:21:07.653824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.653911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.653933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.658703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.658772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.658793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.663307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.663387] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.663406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.667934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.667990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.668008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.672493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.672567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.672585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.677027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.677087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.677105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.681562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.681624] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.681642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.686149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.686220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.686239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.690697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.690791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.690810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.695493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.695602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.695623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.700610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with 
pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.700675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.700694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.705787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.705850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.705868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.711105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.711168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.711186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.716100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.716188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.716212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.721669] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.721735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.721754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.726567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.726624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.726642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.731841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.731907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.731926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.736818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.736885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.736903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 
17:21:07.741876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.742019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.742038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.746782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.746833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.746852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.752390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.752466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.752484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.757700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.757773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.757790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.762622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.762676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.762694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.768160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.768226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.768245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.773427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.773480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.773499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.778546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.778825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.873 [2024-11-20 17:21:07.778846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.873 [2024-11-20 17:21:07.783637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.873 [2024-11-20 17:21:07.783919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.783939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.788504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.788786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.788806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.793584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.793866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.793886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.798633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.798904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.798923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.803539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.803815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.803836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.809306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.809582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.809602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.814379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.814648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.814667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.819931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.820213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:49.874 [2024-11-20 17:21:07.820232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.825504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.825774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.825793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.830991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.831262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.831285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.835498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.835778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.835798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.840048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.840330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.840350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.844409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.844638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.844658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.848843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.849107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.849126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.853296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.853549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.853569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.857904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.858162] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.858182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.862457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.862735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.862754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.866846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.867127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.867146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.871148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.871423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.871442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.875424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 
17:21:07.875681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.875701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.879833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.880119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.880139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.884782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.885055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.885075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.890018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.890307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.890326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.895321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) 
with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.895591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.895610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.899964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.900267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.900287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.904718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.905012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.905032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:49.874 [2024-11-20 17:21:07.909414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:49.874 [2024-11-20 17:21:07.909708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.874 [2024-11-20 17:21:07.909730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.914379] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.914658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.914680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.919049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.919326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.919348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.923571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.923839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.923859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.928483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.928758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.928778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 
17:21:07.933432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.933715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.933735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.938306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.938557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.938576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.943099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.943361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.943381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.947652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.947919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.947938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.952037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.952318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.952342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.956467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.956736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.956756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.961309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.961588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.961607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.966382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.966647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.966666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.970996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.971273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.971292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.975521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.975801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.135 [2024-11-20 17:21:07.975821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.135 [2024-11-20 17:21:07.979816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.135 [2024-11-20 17:21:07.980092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:07.980112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:07.984147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:07.984434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:07.984453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:07.989278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:07.989568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:07.989588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:07.995626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:07.995956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:07.995976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.002580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.002911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.002931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.009579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.009931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 
[2024-11-20 17:21:08.009951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.017017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.017361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.017381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.024424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.024756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.024776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.031786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.032157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.032177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.039750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.040095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.040115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.047466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.047788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.047807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.054592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.054874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.054894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.061875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.062176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.062195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.068879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.069147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.069167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.076315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.076572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.076592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.083556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.083821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.083841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.090550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.090863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.090883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.097661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.097923] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.097943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.103489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.103772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.103793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.108883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.109181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.109208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.113779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.114032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.114056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.136 5599.00 IOPS, 699.88 MiB/s [2024-11-20T16:21:08.179Z] [2024-11-20 17:21:08.119928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.120185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.120211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.124338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.124567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.124585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.128511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.128732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.128751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.132604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.132850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.132870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.137187] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.137423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.137443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.142648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.142974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.136 [2024-11-20 17:21:08.142993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.136 [2024-11-20 17:21:08.148016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.136 [2024-11-20 17:21:08.148263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.137 [2024-11-20 17:21:08.148283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.137 [2024-11-20 17:21:08.152462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.137 [2024-11-20 17:21:08.152727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.137 [2024-11-20 17:21:08.152747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:50.137 [2024-11-20 17:21:08.156938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.137 [2024-11-20 17:21:08.157192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.137 [2024-11-20 17:21:08.157216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.137 [2024-11-20 17:21:08.161590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.137 [2024-11-20 17:21:08.161834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.137 [2024-11-20 17:21:08.161854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.137 [2024-11-20 17:21:08.166300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.137 [2024-11-20 17:21:08.166535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.137 [2024-11-20 17:21:08.166554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.137 [2024-11-20 17:21:08.171056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.137 [2024-11-20 17:21:08.171291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.137 [2024-11-20 17:21:08.171313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.175705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.175961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.175984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.180238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.180501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.180523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.184889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.185143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.185163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.189785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.190056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.190076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.195143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.195377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.195397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.199836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.200061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.200081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.204468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.204715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.204735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.209578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.209810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:50.398 [2024-11-20 17:21:08.209829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.214225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.214484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.214504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.218955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.219182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.219207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.223475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.223712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.223731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.228076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.228306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.228325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.232705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.232942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.232961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.237811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.238088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.238112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.243271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.243504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.243523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.248046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.248292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.248311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.252829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.253055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.253074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.257684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.257909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.257928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.262190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.262423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.262442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.266346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 
00:26:50.398 [2024-11-20 17:21:08.266594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.266613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.270576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.270771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.270790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.274931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.275131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.275149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.279236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.279448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.279466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.398 [2024-11-20 17:21:08.283233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.398 [2024-11-20 17:21:08.283458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.398 [2024-11-20 17:21:08.283477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.287503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.287724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.287743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.292648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.292851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.292869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.297065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.297293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.297312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.301091] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.301335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.301354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.305029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.305256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.305275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.309046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.309294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.309313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.313257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.313473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.313492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:50.399 [2024-11-20 17:21:08.317282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.317487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.317506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.321242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.321457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.321475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.325233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.325426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.325445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.329333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.329552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.329571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.333463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.333674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.333693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.337470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.337687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.337706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.341594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.341807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.341826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.346035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.346303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.346321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.350437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.350661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.350687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.355730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.356019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.356038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.362214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.362451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.362471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.367493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.367718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:50.399 [2024-11-20 17:21:08.367737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.373161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.373343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.373362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.378974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.379182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.379207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.384792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.385044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.385063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.391185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.391531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.391550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.397886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.398171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.398191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.404064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.404353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.404373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.409677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.409930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.409949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.399 [2024-11-20 17:21:08.414942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.399 [2024-11-20 17:21:08.415170] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.399 [2024-11-20 17:21:08.415189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.400 [2024-11-20 17:21:08.419995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.400 [2024-11-20 17:21:08.420226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.400 [2024-11-20 17:21:08.420244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.400 [2024-11-20 17:21:08.424422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.400 [2024-11-20 17:21:08.424632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.400 [2024-11-20 17:21:08.424651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.400 [2024-11-20 17:21:08.428647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.400 [2024-11-20 17:21:08.428857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.400 [2024-11-20 17:21:08.428876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.400 [2024-11-20 17:21:08.432838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.400 [2024-11-20 17:21:08.433061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.400 [2024-11-20 17:21:08.433082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.659 [2024-11-20 17:21:08.437382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.437608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.437630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.441985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.442186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.442211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.446896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.447103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.447125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.451421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with 
pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.451555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.451574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.455982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.456128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.456147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.460333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.460526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.460546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.465189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.465342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.465360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.469558] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.469718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.469736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.473681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.473858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.473877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.477543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.477720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.477738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.481359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.481557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.481580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 
17:21:08.485270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.485459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.485478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.489236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.489417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.489436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.493007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.493218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.493236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.496774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.496963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.496982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.501119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.501309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.501329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.505700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.505841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.505859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.509858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.510047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.510064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.513844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.514009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.514027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.518293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.660 [2024-11-20 17:21:08.518480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.660 [2024-11-20 17:21:08.518497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.660 [2024-11-20 17:21:08.523671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.523960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.523980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.529233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.529394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.529414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.535760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.535901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.535919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.542197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.542413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.542432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.548706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.548883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.548902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.555057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.555210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.555229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.561090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.561324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:50.661 [2024-11-20 17:21:08.561344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.566759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.566962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.566981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.572125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.572360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.572379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.577336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.577571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.577591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.582519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.582829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.582848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.587896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.588096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.588116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.593383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.593697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.593717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.598728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.598950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.598970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.603805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.604032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.604055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.609042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.609358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.609378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.613209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.613420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.613444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.617167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.617387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.617407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.621288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 
00:26:50.661 [2024-11-20 17:21:08.621485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.621505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.625376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.625615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.625635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.629485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.629720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.629740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.633577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.633773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.633793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.637616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.637803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.637822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.661 [2024-11-20 17:21:08.641632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.661 [2024-11-20 17:21:08.641816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.661 [2024-11-20 17:21:08.641835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.645604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.645790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.645810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.650678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.651000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.651021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.655603] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.655790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.655809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.659916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.660156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.660176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.664142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.664360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.664382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.668395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.668623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.668644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:50.662 [2024-11-20 17:21:08.672513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.672715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.672734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.676594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.676844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.676864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.680582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.680772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.680797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.684604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.684835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.684855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.688618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.688798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.688816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.692589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.692770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.692788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.662 [2024-11-20 17:21:08.696656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.662 [2024-11-20 17:21:08.696864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.662 [2024-11-20 17:21:08.696887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.921 [2024-11-20 17:21:08.700834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.921 [2024-11-20 17:21:08.701059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.701082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.704901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.705088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.705110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.708918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.709122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.709148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.712872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.713052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.713071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.717376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.717602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:50.922 [2024-11-20 17:21:08.717621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.722852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.723068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.723092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.727985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.728283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.728303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.733059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.733359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.733379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.738174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.738434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.738454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.743574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.743778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.743798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.748920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.749137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.749158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.754474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.754786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.754806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.759553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.759743] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.759770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.763627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.763809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.763828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.767701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.767888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.767907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.771813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.772035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.772055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.775875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.776097] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.776118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.779982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.780166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.780185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.784151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.784357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.784377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.788310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.788515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.788535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.792378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with 
pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.792628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.792648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.797072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.797291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.797310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.801526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.801731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.801751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.805681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.805883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.805903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.809894] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.810076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.810096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.814032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.814234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.814253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.818146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.818331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.922 [2024-11-20 17:21:08.818350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.922 [2024-11-20 17:21:08.822108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.922 [2024-11-20 17:21:08.822311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.822337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 
17:21:08.826069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.826299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.826319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.830233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.830459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.830478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.834347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.834525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.834543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.838442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.838658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.838681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.843273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.843511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.843530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.847901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.848076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.848096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.852962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.853254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.853275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.858789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.859015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.859035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.864749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.864893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.864911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.871155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.871403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.871424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.877522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.877721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.877741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.882582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.882762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.882780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.886704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.886893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.886911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.890742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.890936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.890954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.894824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.895009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.895027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.898855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.899041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:50.923 [2024-11-20 17:21:08.899060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.902873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.903060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.903080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.907265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.907468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.907488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.911431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.911613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.911632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.915334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.915521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.915541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.919244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.919432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.919452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.923149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.923349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.923369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.927057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.927258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.927276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.930984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.931167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.931185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.934816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.934999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.923 [2024-11-20 17:21:08.935018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.923 [2024-11-20 17:21:08.938760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.923 [2024-11-20 17:21:08.938947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.924 [2024-11-20 17:21:08.938966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:50.924 [2024-11-20 17:21:08.943779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.924 [2024-11-20 17:21:08.944104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.924 [2024-11-20 17:21:08.944124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:50.924 [2024-11-20 17:21:08.948426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 
00:26:50.924 [2024-11-20 17:21:08.948601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.924 [2024-11-20 17:21:08.948621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:50.924 [2024-11-20 17:21:08.952821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.924 [2024-11-20 17:21:08.953114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.924 [2024-11-20 17:21:08.953134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:50.924 [2024-11-20 17:21:08.958191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:50.924 [2024-11-20 17:21:08.958488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.924 [2024-11-20 17:21:08.958513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:08.962713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:08.962891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:08.962912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:08.967602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:08.967809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:08.967831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:08.972811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:08.973018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:08.973038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:08.977781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:08.977973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:08.978000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:08.982872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:08.983065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:08.983085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:08.988174] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:08.988367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:08.988385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:08.993354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:08.993571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:08.993591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:08.998827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:08.999057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:08.999076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.004141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.004341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.004361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:51.183 [2024-11-20 17:21:09.009110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.009266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.009284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.014553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.014715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.014733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.019528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.019680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.019697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.024584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.024760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.024778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.029707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.029860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.029880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.034784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.034926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.034944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.039964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.040119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.040138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.045348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.045522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.045541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.050404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.050499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.050517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.055401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.055500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.055518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.060184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.060360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.060380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.065254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.065428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:51.183 [2024-11-20 17:21:09.065447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.070446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.070637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.070657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.075986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.076162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.076180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.081328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.081493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.183 [2024-11-20 17:21:09.081510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.183 [2024-11-20 17:21:09.088005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.183 [2024-11-20 17:21:09.088082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.184 [2024-11-20 17:21:09.088100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.184 [2024-11-20 17:21:09.093927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.184 [2024-11-20 17:21:09.094076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.184 [2024-11-20 17:21:09.094098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.184 [2024-11-20 17:21:09.098402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.184 [2024-11-20 17:21:09.098459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.184 [2024-11-20 17:21:09.098478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.184 [2024-11-20 17:21:09.102358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.184 [2024-11-20 17:21:09.102433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.184 [2024-11-20 17:21:09.102453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.184 [2024-11-20 17:21:09.106310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.184 [2024-11-20 17:21:09.106390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.184 [2024-11-20 17:21:09.106408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:51.184 [2024-11-20 17:21:09.110395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.184 [2024-11-20 17:21:09.110459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.184 [2024-11-20 17:21:09.110477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:51.184 [2024-11-20 17:21:09.114238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.184 [2024-11-20 17:21:09.114326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.184 [2024-11-20 17:21:09.114344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:51.184 [2024-11-20 17:21:09.118541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x195c660) with pdu=0x200016eff3c8 00:26:51.184 [2024-11-20 17:21:09.118611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.184 [2024-11-20 17:21:09.118629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:51.184 6111.50 IOPS, 763.94 MiB/s 00:26:51.184 Latency(us) 00:26:51.184 [2024-11-20T16:21:09.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.184 
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:51.184 nvme0n1 : 2.00 6110.37 763.80 0.00 0.00 2614.36 1794.44 10236.10 00:26:51.184 [2024-11-20T16:21:09.227Z] =================================================================================================================== 00:26:51.184 [2024-11-20T16:21:09.227Z] Total : 6110.37 763.80 0.00 0.00 2614.36 1794.44 10236.10 00:26:51.184 { 00:26:51.184 "results": [ 00:26:51.184 { 00:26:51.184 "job": "nvme0n1", 00:26:51.184 "core_mask": "0x2", 00:26:51.184 "workload": "randwrite", 00:26:51.184 "status": "finished", 00:26:51.184 "queue_depth": 16, 00:26:51.184 "io_size": 131072, 00:26:51.184 "runtime": 2.00299, 00:26:51.184 "iops": 6110.3650043185435, 00:26:51.184 "mibps": 763.7956255398179, 00:26:51.184 "io_failed": 0, 00:26:51.184 "io_timeout": 0, 00:26:51.184 "avg_latency_us": 2614.357790513542, 00:26:51.184 "min_latency_us": 1794.4380952380952, 00:26:51.184 "max_latency_us": 10236.099047619047 00:26:51.184 } 00:26:51.184 ], 00:26:51.184 "core_count": 1 00:26:51.184 } 00:26:51.184 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:51.184 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:51.184 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:51.184 | .driver_specific 00:26:51.184 | .nvme_error 00:26:51.184 | .status_code 00:26:51.184 | .command_transient_transport_error' 00:26:51.184 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 395 > 0 )) 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- 
# killprocess 2644440 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2644440 ']' 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2644440 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2644440 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2644440' 00:26:51.441 killing process with pid 2644440 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2644440 00:26:51.441 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.441 00:26:51.441 Latency(us) 00:26:51.441 [2024-11-20T16:21:09.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.441 [2024-11-20T16:21:09.484Z] =================================================================================================================== 00:26:51.441 [2024-11-20T16:21:09.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.441 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2644440 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2642303 00:26:51.699 17:21:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2642303 ']' 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2642303 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2642303 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2642303' 00:26:51.699 killing process with pid 2642303 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2642303 00:26:51.699 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2642303 00:26:51.959 00:26:51.959 real 0m13.809s 00:26:51.959 user 0m26.186s 00:26:51.959 sys 0m4.709s 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:51.959 ************************************ 00:26:51.959 END TEST nvmf_digest_error 00:26:51.959 ************************************ 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 
00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:51.959 rmmod nvme_tcp 00:26:51.959 rmmod nvme_fabrics 00:26:51.959 rmmod nvme_keyring 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2642303 ']' 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2642303 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2642303 ']' 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2642303 00:26:51.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2642303) - No such process 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2642303 is not found' 00:26:51.959 Process with pid 2642303 is not found 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:51.959 17:21:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.959 17:21:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:54.497 00:26:54.497 real 0m36.169s 00:26:54.497 user 0m54.607s 00:26:54.497 sys 0m13.866s 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:54.497 ************************************ 00:26:54.497 END TEST nvmf_digest 00:26:54.497 ************************************ 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.497 17:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.497 ************************************ 00:26:54.497 START TEST nvmf_bdevperf 00:26:54.497 ************************************ 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:54.497 * Looking for test storage... 00:26:54.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 
00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:54.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.497 --rc genhtml_branch_coverage=1 00:26:54.497 --rc genhtml_function_coverage=1 00:26:54.497 --rc genhtml_legend=1 00:26:54.497 --rc geninfo_all_blocks=1 00:26:54.497 --rc geninfo_unexecuted_blocks=1 00:26:54.497 00:26:54.497 ' 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:54.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.497 --rc genhtml_branch_coverage=1 00:26:54.497 --rc genhtml_function_coverage=1 00:26:54.497 --rc genhtml_legend=1 00:26:54.497 --rc geninfo_all_blocks=1 00:26:54.497 --rc geninfo_unexecuted_blocks=1 00:26:54.497 00:26:54.497 ' 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:54.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.497 --rc genhtml_branch_coverage=1 00:26:54.497 --rc genhtml_function_coverage=1 00:26:54.497 --rc genhtml_legend=1 00:26:54.497 --rc geninfo_all_blocks=1 00:26:54.497 --rc geninfo_unexecuted_blocks=1 00:26:54.497 00:26:54.497 ' 00:26:54.497 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:54.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.497 --rc genhtml_branch_coverage=1 00:26:54.497 --rc genhtml_function_coverage=1 00:26:54.497 --rc genhtml_legend=1 00:26:54.497 --rc geninfo_all_blocks=1 00:26:54.497 --rc geninfo_unexecuted_blocks=1 00:26:54.497 00:26:54.498 ' 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 
00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.498 17:21:12 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:54.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:26:54.498 17:21:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:01.123 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:01.123 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:01.123 Found net devices under 0000:86:00.0: cvl_0_0 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.123 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:01.124 Found net devices under 0000:86:00.1: cvl_0_1 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.124 17:21:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:01.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:27:01.124 00:27:01.124 --- 10.0.0.2 ping statistics --- 00:27:01.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.124 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:27:01.124 00:27:01.124 --- 10.0.0.1 ping statistics --- 00:27:01.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.124 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2648564 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2648564 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
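The `nvmf_tcp_init` phase just replayed moves the target-side interface into a network namespace, addresses both ends, opens TCP port 4420, and ping-checks the path. The sketch below reconstructs that sequence under stated assumptions (interface and namespace names taken from this run); `run` only echoes each command so the sketch is safe to execute without root — removing the echo would apply the commands for real.

```shell
# Hedged reconstruction of nvmf_tcp_init (nvmf/common.sh@250-291) as
# this log shows it. Echo-only dry run: no root or real NICs needed.
cmds=()
run() { cmds+=("$*"); echo "+ $*"; }

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
# Open the NVMe/TCP port on the initiator-facing interface.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity pings in both directions, matching the log's 0% packet loss.
run ping -c 1 10.0.0.2
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

Putting the target interface in its own namespace is what lets one host act as both NVMe-oF target (10.0.0.2) and initiator (10.0.0.1) over a real link.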
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2648564 ']' 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.124 [2024-11-20 17:21:18.255371] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:27:01.124 [2024-11-20 17:21:18.255412] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.124 [2024-11-20 17:21:18.334507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:01.124 [2024-11-20 17:21:18.376895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.124 [2024-11-20 17:21:18.376930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:01.124 [2024-11-20 17:21:18.376937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.124 [2024-11-20 17:21:18.376943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.124 [2024-11-20 17:21:18.376948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.124 [2024-11-20 17:21:18.378395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.124 [2024-11-20 17:21:18.378430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.124 [2024-11-20 17:21:18.378430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.124 [2024-11-20 17:21:18.515944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.124 17:21:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.124 Malloc0 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.124 [2024-11-20 17:21:18.589840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
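The `rpc_cmd` calls traced above are the `tgt_init` body of `host/bdevperf.sh`: create the TCP transport, a 64 MiB malloc bdev, a subsystem, attach the namespace, and add a listener. A hedged echo-only sketch of that RPC sequence is below; in the real test each call goes through SPDK's `rpc.py` against the target's `/var/tmp/spdk.sock`, which this sketch only prints.

```shell
# Sketch of tgt_init (host/bdevperf.sh@15-21). rpc_cmd is stubbed to
# record and echo; remove the stub to issue RPCs to a live nvmf_tgt.
rpc_log=()
rpc_cmd() { rpc_log+=("$*"); echo "rpc.py $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The final listener RPC is what produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice in the log.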
]] 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:01.124 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:01.125 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:01.125 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:01.125 { 00:27:01.125 "params": { 00:27:01.125 "name": "Nvme$subsystem", 00:27:01.125 "trtype": "$TEST_TRANSPORT", 00:27:01.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.125 "adrfam": "ipv4", 00:27:01.125 "trsvcid": "$NVMF_PORT", 00:27:01.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.125 "hdgst": ${hdgst:-false}, 00:27:01.125 "ddgst": ${ddgst:-false} 00:27:01.125 }, 00:27:01.125 "method": "bdev_nvme_attach_controller" 00:27:01.125 } 00:27:01.125 EOF 00:27:01.125 )") 00:27:01.125 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:01.125 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:01.125 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:01.125 17:21:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:01.125 "params": { 00:27:01.125 "name": "Nvme1", 00:27:01.125 "trtype": "tcp", 00:27:01.125 "traddr": "10.0.0.2", 00:27:01.125 "adrfam": "ipv4", 00:27:01.125 "trsvcid": "4420", 00:27:01.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:01.125 "hdgst": false, 00:27:01.125 "ddgst": false 00:27:01.125 }, 00:27:01.125 "method": "bdev_nvme_attach_controller" 00:27:01.125 }' 00:27:01.125 [2024-11-20 17:21:18.641665] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:27:01.125 [2024-11-20 17:21:18.641714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648593 ] 00:27:01.125 [2024-11-20 17:21:18.718962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.125 [2024-11-20 17:21:18.760414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.125 Running I/O for 1 seconds... 
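The config bdevperf reads from `/dev/fd/62` is produced by `gen_nvmf_target_json`, whose heredoc-and-join expansion the trace prints in full. The following is a hedged, self-contained approximation of that generator with this run's values hard-coded (traddr 10.0.0.2, port 4420); the real helper takes them from the environment and validates the result with `jq`.

```shell
# Approximation of gen_nvmf_target_json (nvmf/common.sh@560-586):
# one bdev_nvme_attach_controller entry per subsystem id, comma-joined.
gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join entries with commas, as the IFS=, printf in the log does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

json=$(gen_nvmf_target_json 1)
echo "$json"
```

For a single subsystem this emits the same `Nvme1`/`cnode1` attach block the trace shows being handed to bdevperf.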
00:27:02.056 11392.00 IOPS, 44.50 MiB/s 00:27:02.056 Latency(us) 00:27:02.056 [2024-11-20T16:21:20.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.056 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:02.056 Verification LBA range: start 0x0 length 0x4000 00:27:02.056 Nvme1n1 : 1.01 11445.81 44.71 0.00 0.00 11137.50 670.96 11734.06 00:27:02.056 [2024-11-20T16:21:20.099Z] =================================================================================================================== 00:27:02.056 [2024-11-20T16:21:20.099Z] Total : 11445.81 44.71 0.00 0.00 11137.50 670.96 11734.06 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2648882 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.314 { 00:27:02.314 "params": { 00:27:02.314 "name": "Nvme$subsystem", 00:27:02.314 "trtype": "$TEST_TRANSPORT", 00:27:02.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.314 "adrfam": "ipv4", 00:27:02.314 "trsvcid": "$NVMF_PORT", 00:27:02.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.314 "hdgst": ${hdgst:-false}, 00:27:02.314 "ddgst": 
${ddgst:-false} 00:27:02.314 }, 00:27:02.314 "method": "bdev_nvme_attach_controller" 00:27:02.314 } 00:27:02.314 EOF 00:27:02.314 )") 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:02.314 17:21:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:02.314 "params": { 00:27:02.314 "name": "Nvme1", 00:27:02.314 "trtype": "tcp", 00:27:02.314 "traddr": "10.0.0.2", 00:27:02.314 "adrfam": "ipv4", 00:27:02.314 "trsvcid": "4420", 00:27:02.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:02.314 "hdgst": false, 00:27:02.314 "ddgst": false 00:27:02.314 }, 00:27:02.314 "method": "bdev_nvme_attach_controller" 00:27:02.314 }' 00:27:02.314 [2024-11-20 17:21:20.295214] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:27:02.314 [2024-11-20 17:21:20.295261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648882 ] 00:27:02.572 [2024-11-20 17:21:20.368103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.572 [2024-11-20 17:21:20.408642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.830 Running I/O for 15 seconds... 
00:27:05.134 11236.00 IOPS, 43.89 MiB/s [2024-11-20T16:21:23.438Z] 11327.50 IOPS, 44.25 MiB/s [2024-11-20T16:21:23.438Z] 17:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2648564 00:27:05.395 17:21:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:05.395 [2024-11-20 17:21:23.265690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.395 [2024-11-20 17:21:23.265723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.395 [2024-11-20 17:21:23.265741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.395 [2024-11-20 17:21:23.265750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.395 [2024-11-20 17:21:23.265760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.395 [2024-11-20 17:21:23.265767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.395 [2024-11-20 17:21:23.265777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 
[2024-11-20 17:21:23.265901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.265987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.265996] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:114 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:05.396 [2024-11-20 17:21:23.266451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.396 [2024-11-20 17:21:23.266494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.396 [2024-11-20 17:21:23.266500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 
[2024-11-20 17:21:23.266694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99048 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 
17:21:23.266937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.397 [2024-11-20 17:21:23.266985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.397 [2024-11-20 17:21:23.266992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.266998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267013] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99144 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267178] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:05.398 [2024-11-20 17:21:23.267353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.398 [2024-11-20 17:21:23.267482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.398 [2024-11-20 17:21:23.267563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.398 [2024-11-20 17:21:23.267569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 [2024-11-20 17:21:23.267583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 
[2024-11-20 17:21:23.267597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 [2024-11-20 17:21:23.267613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 [2024-11-20 17:21:23.267628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 [2024-11-20 17:21:23.267642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 [2024-11-20 17:21:23.267656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 [2024-11-20 17:21:23.267670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 [2024-11-20 17:21:23.267685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.399 [2024-11-20 17:21:23.267699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.399 [2024-11-20 17:21:23.267713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.267720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2aae0 is same with the state(6) to be set 00:27:05.399 [2024-11-20 17:21:23.267729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:05.399 [2024-11-20 17:21:23.267734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:05.399 [2024-11-20 17:21:23.267740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98504 len:8 PRP1 0x0 PRP2 0x0 00:27:05.399 [2024-11-20 17:21:23.267747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.399 [2024-11-20 17:21:23.270551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.399 [2024-11-20 17:21:23.270601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.399 [2024-11-20 17:21:23.271184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.399 [2024-11-20 17:21:23.271199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.399 [2024-11-20 17:21:23.271214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.399 [2024-11-20 17:21:23.271388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.399 [2024-11-20 17:21:23.271562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.399 [2024-11-20 17:21:23.271574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.399 [2024-11-20 17:21:23.271582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.399 [2024-11-20 17:21:23.271590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.399 [2024-11-20 17:21:23.283790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.399 [2024-11-20 17:21:23.284238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.399 [2024-11-20 17:21:23.284257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.399 [2024-11-20 17:21:23.284264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.399 [2024-11-20 17:21:23.284437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.399 [2024-11-20 17:21:23.284618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.399 [2024-11-20 17:21:23.284626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.399 [2024-11-20 17:21:23.284634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.399 [2024-11-20 17:21:23.284640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.399 [2024-11-20 17:21:23.296686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.399 [2024-11-20 17:21:23.297063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.399 [2024-11-20 17:21:23.297107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.399 [2024-11-20 17:21:23.297130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.399 [2024-11-20 17:21:23.297654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.399 [2024-11-20 17:21:23.297824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.399 [2024-11-20 17:21:23.297832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.399 [2024-11-20 17:21:23.297838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.399 [2024-11-20 17:21:23.297844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.399 [2024-11-20 17:21:23.309586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.399 [2024-11-20 17:21:23.310020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.399 [2024-11-20 17:21:23.310037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.399 [2024-11-20 17:21:23.310044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.399 [2024-11-20 17:21:23.310219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.399 [2024-11-20 17:21:23.310388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.399 [2024-11-20 17:21:23.310396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.399 [2024-11-20 17:21:23.310403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.399 [2024-11-20 17:21:23.310413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.399 [2024-11-20 17:21:23.322465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.399 [2024-11-20 17:21:23.322802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.399 [2024-11-20 17:21:23.322818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.399 [2024-11-20 17:21:23.322825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.399 [2024-11-20 17:21:23.322993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.399 [2024-11-20 17:21:23.323164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.399 [2024-11-20 17:21:23.323173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.399 [2024-11-20 17:21:23.323179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.399 [2024-11-20 17:21:23.323185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.399 [2024-11-20 17:21:23.335309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.399 [2024-11-20 17:21:23.335685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.399 [2024-11-20 17:21:23.335700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.399 [2024-11-20 17:21:23.335708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.399 [2024-11-20 17:21:23.335876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.399 [2024-11-20 17:21:23.336044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.400 [2024-11-20 17:21:23.336052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.400 [2024-11-20 17:21:23.336058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.400 [2024-11-20 17:21:23.336064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.400 [2024-11-20 17:21:23.348189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.400 [2024-11-20 17:21:23.348561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.400 [2024-11-20 17:21:23.348578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.400 [2024-11-20 17:21:23.348585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.400 [2024-11-20 17:21:23.348753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.400 [2024-11-20 17:21:23.348921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.400 [2024-11-20 17:21:23.348929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.400 [2024-11-20 17:21:23.348936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.400 [2024-11-20 17:21:23.348941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.400 [2024-11-20 17:21:23.361070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.400 [2024-11-20 17:21:23.361489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.400 [2024-11-20 17:21:23.361533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.400 [2024-11-20 17:21:23.361556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.400 [2024-11-20 17:21:23.362032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.400 [2024-11-20 17:21:23.362208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.400 [2024-11-20 17:21:23.362216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.400 [2024-11-20 17:21:23.362223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.400 [2024-11-20 17:21:23.362229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.400 [2024-11-20 17:21:23.373955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.400 [2024-11-20 17:21:23.374321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.400 [2024-11-20 17:21:23.374338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.400 [2024-11-20 17:21:23.374345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.400 [2024-11-20 17:21:23.374513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.400 [2024-11-20 17:21:23.374682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.400 [2024-11-20 17:21:23.374690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.400 [2024-11-20 17:21:23.374696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.400 [2024-11-20 17:21:23.374702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.400 [2024-11-20 17:21:23.386830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.400 [2024-11-20 17:21:23.387273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.400 [2024-11-20 17:21:23.387317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.400 [2024-11-20 17:21:23.387340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.400 [2024-11-20 17:21:23.387922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.400 [2024-11-20 17:21:23.388146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.400 [2024-11-20 17:21:23.388153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.400 [2024-11-20 17:21:23.388159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.400 [2024-11-20 17:21:23.388165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.400 [2024-11-20 17:21:23.399885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.400 [2024-11-20 17:21:23.400301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.400 [2024-11-20 17:21:23.400318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.400 [2024-11-20 17:21:23.400325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.400 [2024-11-20 17:21:23.400501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.400 [2024-11-20 17:21:23.400660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.400 [2024-11-20 17:21:23.400668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.400 [2024-11-20 17:21:23.400674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.400 [2024-11-20 17:21:23.400679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.400 [2024-11-20 17:21:23.412846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.400 [2024-11-20 17:21:23.413241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.400 [2024-11-20 17:21:23.413257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.400 [2024-11-20 17:21:23.413265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.400 [2024-11-20 17:21:23.413433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.400 [2024-11-20 17:21:23.413602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.400 [2024-11-20 17:21:23.413610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.400 [2024-11-20 17:21:23.413616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.400 [2024-11-20 17:21:23.413622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.400 [2024-11-20 17:21:23.425634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.400 [2024-11-20 17:21:23.426000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.400 [2024-11-20 17:21:23.426014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.400 [2024-11-20 17:21:23.426021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.400 [2024-11-20 17:21:23.426180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.400 [2024-11-20 17:21:23.426365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.400 [2024-11-20 17:21:23.426373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.400 [2024-11-20 17:21:23.426379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.400 [2024-11-20 17:21:23.426385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.661 [2024-11-20 17:21:23.438590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.661 [2024-11-20 17:21:23.439011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.661 [2024-11-20 17:21:23.439027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.661 [2024-11-20 17:21:23.439034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.661 [2024-11-20 17:21:23.439210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.661 [2024-11-20 17:21:23.439379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.661 [2024-11-20 17:21:23.439390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.661 [2024-11-20 17:21:23.439397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.661 [2024-11-20 17:21:23.439403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.661 [2024-11-20 17:21:23.451477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.661 [2024-11-20 17:21:23.451874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.661 [2024-11-20 17:21:23.451890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.661 [2024-11-20 17:21:23.451897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.661 [2024-11-20 17:21:23.452065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.661 [2024-11-20 17:21:23.452240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.661 [2024-11-20 17:21:23.452249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.661 [2024-11-20 17:21:23.452255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.661 [2024-11-20 17:21:23.452262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.661 [2024-11-20 17:21:23.464274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.661 [2024-11-20 17:21:23.464666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.661 [2024-11-20 17:21:23.464711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.661 [2024-11-20 17:21:23.464735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.661 [2024-11-20 17:21:23.465243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.661 [2024-11-20 17:21:23.465413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.661 [2024-11-20 17:21:23.465421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.661 [2024-11-20 17:21:23.465427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.661 [2024-11-20 17:21:23.465433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.661 [2024-11-20 17:21:23.477131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.661 [2024-11-20 17:21:23.477570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.661 [2024-11-20 17:21:23.477586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.661 [2024-11-20 17:21:23.477593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.661 [2024-11-20 17:21:23.477761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.661 [2024-11-20 17:21:23.477929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.477938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.477944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.477953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.489972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.490365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.490381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.490388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.490546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.662 [2024-11-20 17:21:23.490706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.490714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.490720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.490725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.502845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.503263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.503280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.503286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.503445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.662 [2024-11-20 17:21:23.503606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.503614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.503620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.503626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.515692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.516130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.516147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.516155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.516331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.662 [2024-11-20 17:21:23.516505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.516514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.516521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.516527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.528787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.529198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.529223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.529231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.529405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.662 [2024-11-20 17:21:23.529580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.529588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.529596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.529603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.541832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.542242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.542259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.542267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.542440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.662 [2024-11-20 17:21:23.542615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.542623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.542629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.542635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.554783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.555213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.555259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.555282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.555866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.662 [2024-11-20 17:21:23.556460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.556481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.556487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.556493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.567765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.568174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.568229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.568253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.568712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.662 [2024-11-20 17:21:23.568882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.568890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.568896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.568902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.580633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.581052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.581097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.581121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.581567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.662 [2024-11-20 17:21:23.581736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.662 [2024-11-20 17:21:23.581744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.662 [2024-11-20 17:21:23.581750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.662 [2024-11-20 17:21:23.581757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.662 [2024-11-20 17:21:23.593435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.662 [2024-11-20 17:21:23.593832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.662 [2024-11-20 17:21:23.593876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.662 [2024-11-20 17:21:23.593899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.662 [2024-11-20 17:21:23.594393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.663 [2024-11-20 17:21:23.594563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.663 [2024-11-20 17:21:23.594571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.663 [2024-11-20 17:21:23.594577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.663 [2024-11-20 17:21:23.594583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.663 [2024-11-20 17:21:23.606297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.663 [2024-11-20 17:21:23.606717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.663 [2024-11-20 17:21:23.606733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.663 [2024-11-20 17:21:23.606740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.663 [2024-11-20 17:21:23.606908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.663 [2024-11-20 17:21:23.607077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.663 [2024-11-20 17:21:23.607088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.663 [2024-11-20 17:21:23.607094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.663 [2024-11-20 17:21:23.607100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.663 [2024-11-20 17:21:23.619026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.663 [2024-11-20 17:21:23.619430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.663 [2024-11-20 17:21:23.619446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.663 [2024-11-20 17:21:23.619453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.663 [2024-11-20 17:21:23.619622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.663 [2024-11-20 17:21:23.619790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.663 [2024-11-20 17:21:23.619798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.663 [2024-11-20 17:21:23.619804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.663 [2024-11-20 17:21:23.619810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.663 [2024-11-20 17:21:23.631836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.663 [2024-11-20 17:21:23.632248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.663 [2024-11-20 17:21:23.632264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.663 [2024-11-20 17:21:23.632272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.663 [2024-11-20 17:21:23.632440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.663 [2024-11-20 17:21:23.632609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.663 [2024-11-20 17:21:23.632617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.663 [2024-11-20 17:21:23.632623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.663 [2024-11-20 17:21:23.632629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.663 [2024-11-20 17:21:23.644640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.663 [2024-11-20 17:21:23.645059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.663 [2024-11-20 17:21:23.645076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.663 [2024-11-20 17:21:23.645083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.663 [2024-11-20 17:21:23.645272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.663 [2024-11-20 17:21:23.645447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.663 [2024-11-20 17:21:23.645455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.663 [2024-11-20 17:21:23.645461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.663 [2024-11-20 17:21:23.645468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.663 [2024-11-20 17:21:23.657487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.663 [2024-11-20 17:21:23.657897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.663 [2024-11-20 17:21:23.657913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.663 [2024-11-20 17:21:23.657920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.663 [2024-11-20 17:21:23.658088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.663 [2024-11-20 17:21:23.658263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.663 [2024-11-20 17:21:23.658272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.663 [2024-11-20 17:21:23.658278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.663 [2024-11-20 17:21:23.658284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.663 [2024-11-20 17:21:23.670371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.663 [2024-11-20 17:21:23.670811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.663 [2024-11-20 17:21:23.670854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.663 [2024-11-20 17:21:23.670878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.663 [2024-11-20 17:21:23.671450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.663 [2024-11-20 17:21:23.671620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.663 [2024-11-20 17:21:23.671628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.663 [2024-11-20 17:21:23.671634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.663 [2024-11-20 17:21:23.671640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.663 [2024-11-20 17:21:23.683239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.663 [2024-11-20 17:21:23.683640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.663 [2024-11-20 17:21:23.683656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.663 [2024-11-20 17:21:23.683663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.663 [2024-11-20 17:21:23.683831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.663 [2024-11-20 17:21:23.684000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.663 [2024-11-20 17:21:23.684009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.663 [2024-11-20 17:21:23.684015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.663 [2024-11-20 17:21:23.684021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.663 [2024-11-20 17:21:23.696136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.663 [2024-11-20 17:21:23.696578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.663 [2024-11-20 17:21:23.696629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.663 [2024-11-20 17:21:23.696653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.664 [2024-11-20 17:21:23.697250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.664 [2024-11-20 17:21:23.697820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.664 [2024-11-20 17:21:23.697828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.664 [2024-11-20 17:21:23.697835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.664 [2024-11-20 17:21:23.697841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.924 [2024-11-20 17:21:23.709162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.924 [2024-11-20 17:21:23.709607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-11-20 17:21:23.709623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.924 [2024-11-20 17:21:23.709630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.924 [2024-11-20 17:21:23.709798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.924 [2024-11-20 17:21:23.709966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.924 [2024-11-20 17:21:23.709974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.924 [2024-11-20 17:21:23.709981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.924 [2024-11-20 17:21:23.709986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.924 [2024-11-20 17:21:23.722060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.924 [2024-11-20 17:21:23.722513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-11-20 17:21:23.722558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.924 [2024-11-20 17:21:23.722580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.924 [2024-11-20 17:21:23.723013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.924 [2024-11-20 17:21:23.723182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.924 [2024-11-20 17:21:23.723190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.924 [2024-11-20 17:21:23.723196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.924 [2024-11-20 17:21:23.723208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.924 [2024-11-20 17:21:23.734953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.924 [2024-11-20 17:21:23.735407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-11-20 17:21:23.735453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.924 [2024-11-20 17:21:23.735476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.924 [2024-11-20 17:21:23.736001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.924 [2024-11-20 17:21:23.736170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.924 [2024-11-20 17:21:23.736179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.924 [2024-11-20 17:21:23.736185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.924 [2024-11-20 17:21:23.736191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.924 9559.00 IOPS, 37.34 MiB/s [2024-11-20T16:21:23.967Z] [2024-11-20 17:21:23.747776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.924 [2024-11-20 17:21:23.748223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-11-20 17:21:23.748240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.924 [2024-11-20 17:21:23.748247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.924 [2024-11-20 17:21:23.748414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.924 [2024-11-20 17:21:23.748587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.924 [2024-11-20 17:21:23.748595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.924 [2024-11-20 17:21:23.748602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.924 [2024-11-20 17:21:23.748608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.924 [2024-11-20 17:21:23.760645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.924 [2024-11-20 17:21:23.761045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-11-20 17:21:23.761061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.924 [2024-11-20 17:21:23.761068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.924 [2024-11-20 17:21:23.761249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.924 [2024-11-20 17:21:23.761416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.924 [2024-11-20 17:21:23.761424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.924 [2024-11-20 17:21:23.761430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.924 [2024-11-20 17:21:23.761436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.924 [2024-11-20 17:21:23.773439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.924 [2024-11-20 17:21:23.773853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.924 [2024-11-20 17:21:23.773869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.773877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.774050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.774228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.774240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.774247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.774253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.786586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.925 [2024-11-20 17:21:23.787020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-20 17:21:23.787037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.787044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.787222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.787397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.787406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.787412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.787418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.799429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.925 [2024-11-20 17:21:23.799876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-20 17:21:23.799892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.799899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.800066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.800245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.800255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.800261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.800268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.812280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.925 [2024-11-20 17:21:23.812706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-20 17:21:23.812750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.812773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.813317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.813492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.813500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.813506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.813513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.825102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.925 [2024-11-20 17:21:23.825537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-20 17:21:23.825554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.825561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.825729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.825897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.825905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.825911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.825917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.837858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.925 [2024-11-20 17:21:23.838193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-20 17:21:23.838213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.838220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.838403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.838571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.838579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.838585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.838591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.850646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.925 [2024-11-20 17:21:23.851087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-20 17:21:23.851104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.851111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.851289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.851463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.851471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.851477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.851483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.863411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.925 [2024-11-20 17:21:23.863741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-20 17:21:23.863759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.863766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.863924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.864083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.864091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.864097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.864103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.876176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.925 [2024-11-20 17:21:23.876581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.925 [2024-11-20 17:21:23.876596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:05.925 [2024-11-20 17:21:23.876603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:05.925 [2024-11-20 17:21:23.876761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:05.925 [2024-11-20 17:21:23.876920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.925 [2024-11-20 17:21:23.876928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.925 [2024-11-20 17:21:23.876934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.925 [2024-11-20 17:21:23.876940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.925 [2024-11-20 17:21:23.888998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.925 [2024-11-20 17:21:23.889416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.925 [2024-11-20 17:21:23.889432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.925 [2024-11-20 17:21:23.889439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.925 [2024-11-20 17:21:23.889607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.925 [2024-11-20 17:21:23.889774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.925 [2024-11-20 17:21:23.889783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.925 [2024-11-20 17:21:23.889789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.925 [2024-11-20 17:21:23.889795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.925 [2024-11-20 17:21:23.901807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.925 [2024-11-20 17:21:23.902152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.926 [2024-11-20 17:21:23.902168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.926 [2024-11-20 17:21:23.902175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.926 [2024-11-20 17:21:23.902364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.926 [2024-11-20 17:21:23.902534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.926 [2024-11-20 17:21:23.902543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.926 [2024-11-20 17:21:23.902549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.926 [2024-11-20 17:21:23.902555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.926 [2024-11-20 17:21:23.914550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.926 [2024-11-20 17:21:23.914964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.926 [2024-11-20 17:21:23.914980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.926 [2024-11-20 17:21:23.914987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.926 [2024-11-20 17:21:23.915146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.926 [2024-11-20 17:21:23.915331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.926 [2024-11-20 17:21:23.915340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.926 [2024-11-20 17:21:23.915346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.926 [2024-11-20 17:21:23.915352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.926 [2024-11-20 17:21:23.927271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.926 [2024-11-20 17:21:23.927684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.926 [2024-11-20 17:21:23.927700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.926 [2024-11-20 17:21:23.927706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.926 [2024-11-20 17:21:23.927865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.926 [2024-11-20 17:21:23.928024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.926 [2024-11-20 17:21:23.928032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.926 [2024-11-20 17:21:23.928038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.926 [2024-11-20 17:21:23.928043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.926 [2024-11-20 17:21:23.940064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.926 [2024-11-20 17:21:23.940493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.926 [2024-11-20 17:21:23.940509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.926 [2024-11-20 17:21:23.940517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.926 [2024-11-20 17:21:23.940685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.926 [2024-11-20 17:21:23.940853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.926 [2024-11-20 17:21:23.940862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.926 [2024-11-20 17:21:23.940871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.926 [2024-11-20 17:21:23.940877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:05.926 [2024-11-20 17:21:23.952987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:05.926 [2024-11-20 17:21:23.953410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.926 [2024-11-20 17:21:23.953426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:05.926 [2024-11-20 17:21:23.953433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:05.926 [2024-11-20 17:21:23.953591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:05.926 [2024-11-20 17:21:23.953750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:05.926 [2024-11-20 17:21:23.953758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:05.926 [2024-11-20 17:21:23.953763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:05.926 [2024-11-20 17:21:23.953769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.186 [2024-11-20 17:21:23.965993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.186 [2024-11-20 17:21:23.966437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.186 [2024-11-20 17:21:23.966453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.186 [2024-11-20 17:21:23.966460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.186 [2024-11-20 17:21:23.966627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.186 [2024-11-20 17:21:23.966795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.186 [2024-11-20 17:21:23.966803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.186 [2024-11-20 17:21:23.966809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.186 [2024-11-20 17:21:23.966815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.186 [2024-11-20 17:21:23.978849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.186 [2024-11-20 17:21:23.979267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.186 [2024-11-20 17:21:23.979282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.186 [2024-11-20 17:21:23.979289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.186 [2024-11-20 17:21:23.979447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.186 [2024-11-20 17:21:23.979606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.186 [2024-11-20 17:21:23.979614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.186 [2024-11-20 17:21:23.979620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.186 [2024-11-20 17:21:23.979625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.186 [2024-11-20 17:21:23.991699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.186 [2024-11-20 17:21:23.992114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.186 [2024-11-20 17:21:23.992129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.186 [2024-11-20 17:21:23.992136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.186 [2024-11-20 17:21:23.992320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.186 [2024-11-20 17:21:23.992489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.186 [2024-11-20 17:21:23.992497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.186 [2024-11-20 17:21:23.992503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.186 [2024-11-20 17:21:23.992509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.186 [2024-11-20 17:21:24.004553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.186 [2024-11-20 17:21:24.004971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.186 [2024-11-20 17:21:24.004987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.186 [2024-11-20 17:21:24.004993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.186 [2024-11-20 17:21:24.005152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.186 [2024-11-20 17:21:24.005336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.186 [2024-11-20 17:21:24.005345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.186 [2024-11-20 17:21:24.005351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.186 [2024-11-20 17:21:24.005357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.186 [2024-11-20 17:21:24.017363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.186 [2024-11-20 17:21:24.017781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.186 [2024-11-20 17:21:24.017797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.186 [2024-11-20 17:21:24.017803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.186 [2024-11-20 17:21:24.017963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.186 [2024-11-20 17:21:24.018122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.186 [2024-11-20 17:21:24.018130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.186 [2024-11-20 17:21:24.018136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.186 [2024-11-20 17:21:24.018142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.186 [2024-11-20 17:21:24.030200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.186 [2024-11-20 17:21:24.030609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.186 [2024-11-20 17:21:24.030626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.186 [2024-11-20 17:21:24.030636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.186 [2024-11-20 17:21:24.030804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.186 [2024-11-20 17:21:24.030972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.186 [2024-11-20 17:21:24.030980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.186 [2024-11-20 17:21:24.030986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.186 [2024-11-20 17:21:24.030992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.186 [2024-11-20 17:21:24.043316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.186 [2024-11-20 17:21:24.043719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.186 [2024-11-20 17:21:24.043735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.186 [2024-11-20 17:21:24.043742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.186 [2024-11-20 17:21:24.043916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.044089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.044098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.044105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.044111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.056084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.056506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.056523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.056530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.056698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.056867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.056875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.056882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.056888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.068894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.069326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.069371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.069395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.069822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.069984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.069992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.069998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.070004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.081625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.082039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.082055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.082061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.082241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.082410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.082419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.082425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.082430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.094484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.094903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.094947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.094970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.095461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.095631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.095639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.095646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.095652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.107285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.107679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.107695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.107701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.107859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.108017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.108025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.108035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.108041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.120126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.120489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.120505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.120513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.120681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.120854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.120862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.120868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.120874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.132972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.133404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.133452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.133476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.134055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.134229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.134238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.134244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.134250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.145755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.146172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.146188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.146194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.146382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.146551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.146559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.146566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.146571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.158635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.159070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.159115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.159138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.187 [2024-11-20 17:21:24.159666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.187 [2024-11-20 17:21:24.159835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.187 [2024-11-20 17:21:24.159843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.187 [2024-11-20 17:21:24.159849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.187 [2024-11-20 17:21:24.159855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.187 [2024-11-20 17:21:24.171586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.187 [2024-11-20 17:21:24.171932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.187 [2024-11-20 17:21:24.171949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.187 [2024-11-20 17:21:24.171956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.188 [2024-11-20 17:21:24.172123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.188 [2024-11-20 17:21:24.172295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.188 [2024-11-20 17:21:24.172305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.188 [2024-11-20 17:21:24.172311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.188 [2024-11-20 17:21:24.172317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.188 [2024-11-20 17:21:24.184449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.188 [2024-11-20 17:21:24.184886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.188 [2024-11-20 17:21:24.184902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.188 [2024-11-20 17:21:24.184909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.188 [2024-11-20 17:21:24.185076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.188 [2024-11-20 17:21:24.185249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.188 [2024-11-20 17:21:24.185259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.188 [2024-11-20 17:21:24.185265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.188 [2024-11-20 17:21:24.185271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.188 [2024-11-20 17:21:24.197206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.188 [2024-11-20 17:21:24.197569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.188 [2024-11-20 17:21:24.197585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.188 [2024-11-20 17:21:24.197596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.188 [2024-11-20 17:21:24.197764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.188 [2024-11-20 17:21:24.197935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.188 [2024-11-20 17:21:24.197943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.188 [2024-11-20 17:21:24.197950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.188 [2024-11-20 17:21:24.197957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.188 [2024-11-20 17:21:24.209957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.188 [2024-11-20 17:21:24.210395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.188 [2024-11-20 17:21:24.210440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.188 [2024-11-20 17:21:24.210465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.188 [2024-11-20 17:21:24.211030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.188 [2024-11-20 17:21:24.211199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.188 [2024-11-20 17:21:24.211214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.188 [2024-11-20 17:21:24.211221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.188 [2024-11-20 17:21:24.211228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.188 [2024-11-20 17:21:24.223007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.188 [2024-11-20 17:21:24.223364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.188 [2024-11-20 17:21:24.223381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.188 [2024-11-20 17:21:24.223389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.188 [2024-11-20 17:21:24.223561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.188 [2024-11-20 17:21:24.223739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.188 [2024-11-20 17:21:24.223747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.188 [2024-11-20 17:21:24.223753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.188 [2024-11-20 17:21:24.223760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.448 [2024-11-20 17:21:24.235951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:06.448 [2024-11-20 17:21:24.236390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.448 [2024-11-20 17:21:24.236406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:06.448 [2024-11-20 17:21:24.236413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:06.448 [2024-11-20 17:21:24.236582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:06.448 [2024-11-20 17:21:24.236754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:06.448 [2024-11-20 17:21:24.236763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:06.448 [2024-11-20 17:21:24.236769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:06.448 [2024-11-20 17:21:24.236774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:06.448 [2024-11-20 17:21:24.248805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.448 [2024-11-20 17:21:24.249264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.448 [2024-11-20 17:21:24.249308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.448 [2024-11-20 17:21:24.249332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.448 [2024-11-20 17:21:24.249914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.448 [2024-11-20 17:21:24.250138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.448 [2024-11-20 17:21:24.250146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.448 [2024-11-20 17:21:24.250153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.448 [2024-11-20 17:21:24.250160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.448 [2024-11-20 17:21:24.261643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.448 [2024-11-20 17:21:24.261997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.262013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.262020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.262188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.262381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.262397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.262404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.262410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.274406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.274826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.274841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.274847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.275006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.275166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.275174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.275183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.275189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.287334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.287773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.287789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.287797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.287969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.288142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.288151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.288158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.288164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.300482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.300918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.300934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.300942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.301115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.301292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.301301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.301308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.301314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.313282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.313608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.313623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.313630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.313790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.313948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.313956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.313961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.313967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.326070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.326511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.326527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.326534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.326701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.326869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.326877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.326883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.326889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.338818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.339238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.339254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.339261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.339420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.339579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.339587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.339593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.339599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.351612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.352049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.352085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.352110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.352708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.353303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.353330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.353350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.353369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.364361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.364754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.364769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.364781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.364940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.365100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.365108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.365114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.365119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.377178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.377611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.449 [2024-11-20 17:21:24.377643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.449 [2024-11-20 17:21:24.377668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.449 [2024-11-20 17:21:24.378274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.449 [2024-11-20 17:21:24.378444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.449 [2024-11-20 17:21:24.378452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.449 [2024-11-20 17:21:24.378458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.449 [2024-11-20 17:21:24.378465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.449 [2024-11-20 17:21:24.390053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.449 [2024-11-20 17:21:24.390502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.450 [2024-11-20 17:21:24.390547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.450 [2024-11-20 17:21:24.390570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.450 [2024-11-20 17:21:24.391154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.450 [2024-11-20 17:21:24.391637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.450 [2024-11-20 17:21:24.391645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.450 [2024-11-20 17:21:24.391652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.450 [2024-11-20 17:21:24.391657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.450 [2024-11-20 17:21:24.402915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.450 [2024-11-20 17:21:24.403223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.450 [2024-11-20 17:21:24.403240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.450 [2024-11-20 17:21:24.403247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.450 [2024-11-20 17:21:24.403416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.450 [2024-11-20 17:21:24.403587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.450 [2024-11-20 17:21:24.403595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.450 [2024-11-20 17:21:24.403601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.450 [2024-11-20 17:21:24.403607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.450 [2024-11-20 17:21:24.415733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.450 [2024-11-20 17:21:24.416121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.450 [2024-11-20 17:21:24.416170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.450 [2024-11-20 17:21:24.416193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.450 [2024-11-20 17:21:24.416726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.450 [2024-11-20 17:21:24.416895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.450 [2024-11-20 17:21:24.416903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.450 [2024-11-20 17:21:24.416909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.450 [2024-11-20 17:21:24.416915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.450 [2024-11-20 17:21:24.428501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.450 [2024-11-20 17:21:24.428918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.450 [2024-11-20 17:21:24.428934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.450 [2024-11-20 17:21:24.428941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.450 [2024-11-20 17:21:24.429109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.450 [2024-11-20 17:21:24.429284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.450 [2024-11-20 17:21:24.429293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.450 [2024-11-20 17:21:24.429299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.450 [2024-11-20 17:21:24.429305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.450 [2024-11-20 17:21:24.441393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.450 [2024-11-20 17:21:24.441750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.450 [2024-11-20 17:21:24.441766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.450 [2024-11-20 17:21:24.441774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.450 [2024-11-20 17:21:24.441941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.450 [2024-11-20 17:21:24.442110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.450 [2024-11-20 17:21:24.442118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.450 [2024-11-20 17:21:24.442129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.450 [2024-11-20 17:21:24.442136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.450 [2024-11-20 17:21:24.454340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.450 [2024-11-20 17:21:24.454719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.450 [2024-11-20 17:21:24.454763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.450 [2024-11-20 17:21:24.454786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.450 [2024-11-20 17:21:24.455347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.450 [2024-11-20 17:21:24.455517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.450 [2024-11-20 17:21:24.455527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.450 [2024-11-20 17:21:24.455535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.450 [2024-11-20 17:21:24.455542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.450 [2024-11-20 17:21:24.469184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.450 [2024-11-20 17:21:24.469550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.450 [2024-11-20 17:21:24.469572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.450 [2024-11-20 17:21:24.469582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.450 [2024-11-20 17:21:24.469835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.450 [2024-11-20 17:21:24.470090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.450 [2024-11-20 17:21:24.470102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.450 [2024-11-20 17:21:24.470111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.450 [2024-11-20 17:21:24.470120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.450 [2024-11-20 17:21:24.482102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.450 [2024-11-20 17:21:24.482476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.450 [2024-11-20 17:21:24.482492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.450 [2024-11-20 17:21:24.482499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.450 [2024-11-20 17:21:24.482672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.450 [2024-11-20 17:21:24.482845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.450 [2024-11-20 17:21:24.482854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.450 [2024-11-20 17:21:24.482860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.450 [2024-11-20 17:21:24.482866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.711 [2024-11-20 17:21:24.495154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.711 [2024-11-20 17:21:24.495458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.711 [2024-11-20 17:21:24.495501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.711 [2024-11-20 17:21:24.495525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.711 [2024-11-20 17:21:24.496105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.711 [2024-11-20 17:21:24.496703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.711 [2024-11-20 17:21:24.496729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.711 [2024-11-20 17:21:24.496749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.711 [2024-11-20 17:21:24.496768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.711 [2024-11-20 17:21:24.509978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.711 [2024-11-20 17:21:24.510433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.711 [2024-11-20 17:21:24.510454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.711 [2024-11-20 17:21:24.510463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.711 [2024-11-20 17:21:24.510697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.711 [2024-11-20 17:21:24.510933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.711 [2024-11-20 17:21:24.510944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.711 [2024-11-20 17:21:24.510953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.711 [2024-11-20 17:21:24.510962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.711 [2024-11-20 17:21:24.522939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.711 [2024-11-20 17:21:24.523318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.711 [2024-11-20 17:21:24.523364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.711 [2024-11-20 17:21:24.523387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.711 [2024-11-20 17:21:24.523873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.711 [2024-11-20 17:21:24.524042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.711 [2024-11-20 17:21:24.524050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.711 [2024-11-20 17:21:24.524056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.711 [2024-11-20 17:21:24.524063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.711 [2024-11-20 17:21:24.536035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.711 [2024-11-20 17:21:24.536315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.711 [2024-11-20 17:21:24.536331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.711 [2024-11-20 17:21:24.536341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.711 [2024-11-20 17:21:24.536514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.711 [2024-11-20 17:21:24.536694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.711 [2024-11-20 17:21:24.536702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.711 [2024-11-20 17:21:24.536709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.711 [2024-11-20 17:21:24.536715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.711 [2024-11-20 17:21:24.549111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.711 [2024-11-20 17:21:24.549468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.711 [2024-11-20 17:21:24.549486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.711 [2024-11-20 17:21:24.549494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.711 [2024-11-20 17:21:24.549688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.711 [2024-11-20 17:21:24.549861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.549869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.549876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.549882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.562424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.562683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.712 [2024-11-20 17:21:24.562699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.712 [2024-11-20 17:21:24.562706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.712 [2024-11-20 17:21:24.562890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.712 [2024-11-20 17:21:24.563075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.563084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.563108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.563116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.575697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.576099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.712 [2024-11-20 17:21:24.576116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.712 [2024-11-20 17:21:24.576124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.712 [2024-11-20 17:21:24.576313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.712 [2024-11-20 17:21:24.576498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.576510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.576517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.576524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.588991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.589442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.712 [2024-11-20 17:21:24.589459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.712 [2024-11-20 17:21:24.589467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.712 [2024-11-20 17:21:24.589651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.712 [2024-11-20 17:21:24.589836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.589845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.589852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.589859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.602134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.602591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.712 [2024-11-20 17:21:24.602608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.712 [2024-11-20 17:21:24.602615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.712 [2024-11-20 17:21:24.602788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.712 [2024-11-20 17:21:24.602961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.602970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.602976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.602982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.615205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.615611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.712 [2024-11-20 17:21:24.615628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.712 [2024-11-20 17:21:24.615635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.712 [2024-11-20 17:21:24.615808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.712 [2024-11-20 17:21:24.615981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.615990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.615996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.616007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.628245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.628595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.712 [2024-11-20 17:21:24.628612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.712 [2024-11-20 17:21:24.628619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.712 [2024-11-20 17:21:24.628792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.712 [2024-11-20 17:21:24.628965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.628974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.628980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.628986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.641291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.641724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.712 [2024-11-20 17:21:24.641740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.712 [2024-11-20 17:21:24.641747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.712 [2024-11-20 17:21:24.641920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.712 [2024-11-20 17:21:24.642092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.642101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.642107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.642113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.654354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.654686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.712 [2024-11-20 17:21:24.654702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.712 [2024-11-20 17:21:24.654709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.712 [2024-11-20 17:21:24.654882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.712 [2024-11-20 17:21:24.655054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.712 [2024-11-20 17:21:24.655063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.712 [2024-11-20 17:21:24.655069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.712 [2024-11-20 17:21:24.655075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.712 [2024-11-20 17:21:24.667602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.712 [2024-11-20 17:21:24.667943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.713 [2024-11-20 17:21:24.667959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.713 [2024-11-20 17:21:24.667967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.713 [2024-11-20 17:21:24.668151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.713 [2024-11-20 17:21:24.668342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.713 [2024-11-20 17:21:24.668351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.713 [2024-11-20 17:21:24.668358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.713 [2024-11-20 17:21:24.668364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.713 [2024-11-20 17:21:24.680804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.713 [2024-11-20 17:21:24.681258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.713 [2024-11-20 17:21:24.681276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.713 [2024-11-20 17:21:24.681283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.713 [2024-11-20 17:21:24.681467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.713 [2024-11-20 17:21:24.681652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.713 [2024-11-20 17:21:24.681661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.713 [2024-11-20 17:21:24.681667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.713 [2024-11-20 17:21:24.681674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.713 [2024-11-20 17:21:24.694023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.713 [2024-11-20 17:21:24.694356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.713 [2024-11-20 17:21:24.694373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.713 [2024-11-20 17:21:24.694381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.713 [2024-11-20 17:21:24.694564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.713 [2024-11-20 17:21:24.694749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.713 [2024-11-20 17:21:24.694758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.713 [2024-11-20 17:21:24.694765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.713 [2024-11-20 17:21:24.694771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.713 [2024-11-20 17:21:24.707292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.713 [2024-11-20 17:21:24.707686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.713 [2024-11-20 17:21:24.707703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.713 [2024-11-20 17:21:24.707711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.713 [2024-11-20 17:21:24.707910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.713 [2024-11-20 17:21:24.708106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.713 [2024-11-20 17:21:24.708116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.713 [2024-11-20 17:21:24.708123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.713 [2024-11-20 17:21:24.708130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.713 [2024-11-20 17:21:24.720657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.713 [2024-11-20 17:21:24.721105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.713 [2024-11-20 17:21:24.721121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.713 [2024-11-20 17:21:24.721129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.713 [2024-11-20 17:21:24.721338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.713 [2024-11-20 17:21:24.721536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.713 [2024-11-20 17:21:24.721545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.713 [2024-11-20 17:21:24.721553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.713 [2024-11-20 17:21:24.721559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.713 [2024-11-20 17:21:24.733800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.713 [2024-11-20 17:21:24.734248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.713 [2024-11-20 17:21:24.734266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.713 [2024-11-20 17:21:24.734273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.713 [2024-11-20 17:21:24.734457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.713 [2024-11-20 17:21:24.734641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.713 [2024-11-20 17:21:24.734650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.713 [2024-11-20 17:21:24.734656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.713 [2024-11-20 17:21:24.734663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.713 7169.25 IOPS, 28.00 MiB/s [2024-11-20T16:21:24.756Z] [2024-11-20 17:21:24.748455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.713 [2024-11-20 17:21:24.748926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.713 [2024-11-20 17:21:24.748944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.713 [2024-11-20 17:21:24.748952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.713 [2024-11-20 17:21:24.749148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.713 [2024-11-20 17:21:24.749353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.713 [2024-11-20 17:21:24.749366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.713 [2024-11-20 17:21:24.749373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.713 [2024-11-20 17:21:24.749380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.975 [2024-11-20 17:21:24.761752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.975 [2024-11-20 17:21:24.762172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.975 [2024-11-20 17:21:24.762190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.975 [2024-11-20 17:21:24.762198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.975 [2024-11-20 17:21:24.762387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.975 [2024-11-20 17:21:24.762570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.975 [2024-11-20 17:21:24.762579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.975 [2024-11-20 17:21:24.762586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.975 [2024-11-20 17:21:24.762592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.975 [2024-11-20 17:21:24.774865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.975 [2024-11-20 17:21:24.775319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.975 [2024-11-20 17:21:24.775337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.975 [2024-11-20 17:21:24.775344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.975 [2024-11-20 17:21:24.775528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.975 [2024-11-20 17:21:24.775713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.975 [2024-11-20 17:21:24.775721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.975 [2024-11-20 17:21:24.775729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.975 [2024-11-20 17:21:24.775736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.975 [2024-11-20 17:21:24.788116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.975 [2024-11-20 17:21:24.788547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.975 [2024-11-20 17:21:24.788564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.975 [2024-11-20 17:21:24.788572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.975 [2024-11-20 17:21:24.788756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.975 [2024-11-20 17:21:24.788944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.975 [2024-11-20 17:21:24.788953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.975 [2024-11-20 17:21:24.788960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.975 [2024-11-20 17:21:24.788971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.975 [2024-11-20 17:21:24.801540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.975 [2024-11-20 17:21:24.802002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.975 [2024-11-20 17:21:24.802020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.975 [2024-11-20 17:21:24.802029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.975 [2024-11-20 17:21:24.802237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.975 [2024-11-20 17:21:24.802435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.975 [2024-11-20 17:21:24.802444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.975 [2024-11-20 17:21:24.802452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.975 [2024-11-20 17:21:24.802460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.975 [2024-11-20 17:21:24.814779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.975 [2024-11-20 17:21:24.815200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.975 [2024-11-20 17:21:24.815223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.975 [2024-11-20 17:21:24.815230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.975 [2024-11-20 17:21:24.815414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.975 [2024-11-20 17:21:24.815600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.975 [2024-11-20 17:21:24.815609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.975 [2024-11-20 17:21:24.815616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.975 [2024-11-20 17:21:24.815622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.975 [2024-11-20 17:21:24.827848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.975 [2024-11-20 17:21:24.828268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.975 [2024-11-20 17:21:24.828313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.975 [2024-11-20 17:21:24.828336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.975 [2024-11-20 17:21:24.828919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.975 [2024-11-20 17:21:24.829521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.975 [2024-11-20 17:21:24.829548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.975 [2024-11-20 17:21:24.829569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.975 [2024-11-20 17:21:24.829588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.975 [2024-11-20 17:21:24.840862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.975 [2024-11-20 17:21:24.841272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.975 [2024-11-20 17:21:24.841288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.841295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.841468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.841642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.841650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.841656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.976 [2024-11-20 17:21:24.841663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.976 [2024-11-20 17:21:24.853823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.976 [2024-11-20 17:21:24.854246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.976 [2024-11-20 17:21:24.854262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.854270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.854451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.854619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.854628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.854634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.976 [2024-11-20 17:21:24.854640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.976 [2024-11-20 17:21:24.866587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.976 [2024-11-20 17:21:24.867010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.976 [2024-11-20 17:21:24.867026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.867033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.867207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.867376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.867384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.867390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.976 [2024-11-20 17:21:24.867396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.976 [2024-11-20 17:21:24.879350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.976 [2024-11-20 17:21:24.879776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.976 [2024-11-20 17:21:24.879820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.879843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.880431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.880603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.880611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.880617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.976 [2024-11-20 17:21:24.880623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.976 [2024-11-20 17:21:24.894219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.976 [2024-11-20 17:21:24.894730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.976 [2024-11-20 17:21:24.894774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.894797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.895395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.895686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.895697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.895707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.976 [2024-11-20 17:21:24.895715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.976 [2024-11-20 17:21:24.907226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.976 [2024-11-20 17:21:24.907627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.976 [2024-11-20 17:21:24.907643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.907650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.907818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.907985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.907993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.907999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.976 [2024-11-20 17:21:24.908005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.976 [2024-11-20 17:21:24.919965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.976 [2024-11-20 17:21:24.920368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.976 [2024-11-20 17:21:24.920412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.920435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.920990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.921391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.921414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.921429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.976 [2024-11-20 17:21:24.921442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.976 [2024-11-20 17:21:24.934843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.976 [2024-11-20 17:21:24.935268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.976 [2024-11-20 17:21:24.935319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.935342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.935890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.936145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.936157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.936166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.976 [2024-11-20 17:21:24.936175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.976 [2024-11-20 17:21:24.947810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.976 [2024-11-20 17:21:24.948231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.976 [2024-11-20 17:21:24.948276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.976 [2024-11-20 17:21:24.948299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.976 [2024-11-20 17:21:24.948833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.976 [2024-11-20 17:21:24.949002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.976 [2024-11-20 17:21:24.949010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.976 [2024-11-20 17:21:24.949016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.977 [2024-11-20 17:21:24.949022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.977 [2024-11-20 17:21:24.960553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.977 [2024-11-20 17:21:24.960968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.977 [2024-11-20 17:21:24.960985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.977 [2024-11-20 17:21:24.960991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.977 [2024-11-20 17:21:24.961159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.977 [2024-11-20 17:21:24.961335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.977 [2024-11-20 17:21:24.961343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.977 [2024-11-20 17:21:24.961350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.977 [2024-11-20 17:21:24.961359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.977 [2024-11-20 17:21:24.973286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.977 [2024-11-20 17:21:24.973679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.977 [2024-11-20 17:21:24.973694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.977 [2024-11-20 17:21:24.973701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.977 [2024-11-20 17:21:24.973859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.977 [2024-11-20 17:21:24.974018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.977 [2024-11-20 17:21:24.974026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.977 [2024-11-20 17:21:24.974031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.977 [2024-11-20 17:21:24.974037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.977 [2024-11-20 17:21:24.986036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.977 [2024-11-20 17:21:24.986451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.977 [2024-11-20 17:21:24.986467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.977 [2024-11-20 17:21:24.986474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.977 [2024-11-20 17:21:24.986641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.977 [2024-11-20 17:21:24.986809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.977 [2024-11-20 17:21:24.986817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.977 [2024-11-20 17:21:24.986823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.977 [2024-11-20 17:21:24.986829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.977 [2024-11-20 17:21:24.998784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.977 [2024-11-20 17:21:24.999182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.977 [2024-11-20 17:21:24.999197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.977 [2024-11-20 17:21:24.999209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.977 [2024-11-20 17:21:24.999392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.977 [2024-11-20 17:21:24.999564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.977 [2024-11-20 17:21:24.999572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.977 [2024-11-20 17:21:24.999578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.977 [2024-11-20 17:21:24.999584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.977 [2024-11-20 17:21:25.011784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.977 [2024-11-20 17:21:25.012164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.977 [2024-11-20 17:21:25.012186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:06.977 [2024-11-20 17:21:25.012193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:06.977 [2024-11-20 17:21:25.012371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:06.977 [2024-11-20 17:21:25.012544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.977 [2024-11-20 17:21:25.012553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.977 [2024-11-20 17:21:25.012559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.977 [2024-11-20 17:21:25.012565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.237 [2024-11-20 17:21:25.024602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.237 [2024-11-20 17:21:25.024993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.237 [2024-11-20 17:21:25.025009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.237 [2024-11-20 17:21:25.025015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.237 [2024-11-20 17:21:25.025174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.237 [2024-11-20 17:21:25.025360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.237 [2024-11-20 17:21:25.025369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.237 [2024-11-20 17:21:25.025375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.237 [2024-11-20 17:21:25.025381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.237 [2024-11-20 17:21:25.037381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.237 [2024-11-20 17:21:25.037800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.237 [2024-11-20 17:21:25.037817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.237 [2024-11-20 17:21:25.037824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.237 [2024-11-20 17:21:25.037992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.237 [2024-11-20 17:21:25.038159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.237 [2024-11-20 17:21:25.038167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.237 [2024-11-20 17:21:25.038173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.237 [2024-11-20 17:21:25.038179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.237 [2024-11-20 17:21:25.050113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.237 [2024-11-20 17:21:25.050525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.237 [2024-11-20 17:21:25.050541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.237 [2024-11-20 17:21:25.050548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.237 [2024-11-20 17:21:25.050720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.237 [2024-11-20 17:21:25.050893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.237 [2024-11-20 17:21:25.050901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.237 [2024-11-20 17:21:25.050907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.237 [2024-11-20 17:21:25.050913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.237 [2024-11-20 17:21:25.063081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.237 [2024-11-20 17:21:25.063520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.237 [2024-11-20 17:21:25.063537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.237 [2024-11-20 17:21:25.063544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.237 [2024-11-20 17:21:25.063717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.237 [2024-11-20 17:21:25.063890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.237 [2024-11-20 17:21:25.063898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.237 [2024-11-20 17:21:25.063905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.237 [2024-11-20 17:21:25.063912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.237 [2024-11-20 17:21:25.076216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.237 [2024-11-20 17:21:25.076625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.237 [2024-11-20 17:21:25.076641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.237 [2024-11-20 17:21:25.076649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.237 [2024-11-20 17:21:25.076823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.237 [2024-11-20 17:21:25.077000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.237 [2024-11-20 17:21:25.077008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.237 [2024-11-20 17:21:25.077015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.237 [2024-11-20 17:21:25.077021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.237 [2024-11-20 17:21:25.089011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.237 [2024-11-20 17:21:25.089430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.237 [2024-11-20 17:21:25.089447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.237 [2024-11-20 17:21:25.089453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.237 [2024-11-20 17:21:25.089621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.237 [2024-11-20 17:21:25.089789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.237 [2024-11-20 17:21:25.089800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.237 [2024-11-20 17:21:25.089806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.237 [2024-11-20 17:21:25.089812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.237 [2024-11-20 17:21:25.101770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.102185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.102214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.102222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.102390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.102559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.102567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.102573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.102579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.114570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.114988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.115003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.115010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.115178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.115356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.115365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.115371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.115377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.127519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.127942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.127990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.128014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.128482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.128652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.128660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.128666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.128672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.140384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.140728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.140745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.140752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.140920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.141088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.141096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.141102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.141109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.153250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.153687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.153721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.153745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.154312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.154482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.154490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.154497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.154503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.166059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.166483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.166500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.166507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.166675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.166843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.166851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.166858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.166864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.178865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.179282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.179302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.179309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.179477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.179645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.179653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.179659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.179665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.191627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.191998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.192014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.192021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.192180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.192367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.192376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.192382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.192388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.204497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.204883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.204898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.204905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.205063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.205243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.205251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.205258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.238 [2024-11-20 17:21:25.205264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.238 [2024-11-20 17:21:25.217295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.238 [2024-11-20 17:21:25.217682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.238 [2024-11-20 17:21:25.217698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.238 [2024-11-20 17:21:25.217704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.238 [2024-11-20 17:21:25.217866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.238 [2024-11-20 17:21:25.218026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.238 [2024-11-20 17:21:25.218034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.238 [2024-11-20 17:21:25.218039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.239 [2024-11-20 17:21:25.218045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.239 [2024-11-20 17:21:25.230040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.239 [2024-11-20 17:21:25.230437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-11-20 17:21:25.230481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.239 [2024-11-20 17:21:25.230504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.239 [2024-11-20 17:21:25.231085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.239 [2024-11-20 17:21:25.231575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.239 [2024-11-20 17:21:25.231583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.239 [2024-11-20 17:21:25.231589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.239 [2024-11-20 17:21:25.231595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.239 [2024-11-20 17:21:25.242882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.239 [2024-11-20 17:21:25.243264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-11-20 17:21:25.243309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.239 [2024-11-20 17:21:25.243332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.239 [2024-11-20 17:21:25.243830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.239 [2024-11-20 17:21:25.243999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.239 [2024-11-20 17:21:25.244008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.239 [2024-11-20 17:21:25.244014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.239 [2024-11-20 17:21:25.244021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.239 [2024-11-20 17:21:25.255659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.239 [2024-11-20 17:21:25.256094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-11-20 17:21:25.256110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.239 [2024-11-20 17:21:25.256117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.239 [2024-11-20 17:21:25.256292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.239 [2024-11-20 17:21:25.256461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.239 [2024-11-20 17:21:25.256470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.239 [2024-11-20 17:21:25.256479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.239 [2024-11-20 17:21:25.256486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.239 [2024-11-20 17:21:25.268451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.239 [2024-11-20 17:21:25.268886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.239 [2024-11-20 17:21:25.268903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.239 [2024-11-20 17:21:25.268910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.239 [2024-11-20 17:21:25.269078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.239 [2024-11-20 17:21:25.269253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.239 [2024-11-20 17:21:25.269278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.239 [2024-11-20 17:21:25.269285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.239 [2024-11-20 17:21:25.269292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.498 [2024-11-20 17:21:25.281339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.498 [2024-11-20 17:21:25.281766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.498 [2024-11-20 17:21:25.281782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.498 [2024-11-20 17:21:25.281789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.281962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.282135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.282144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.282151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.282158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.499 [2024-11-20 17:21:25.294190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.499 [2024-11-20 17:21:25.294607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.499 [2024-11-20 17:21:25.294623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.499 [2024-11-20 17:21:25.294629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.294788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.294947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.294955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.294961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.294967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.499 [2024-11-20 17:21:25.306929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.499 [2024-11-20 17:21:25.307295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.499 [2024-11-20 17:21:25.307311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.499 [2024-11-20 17:21:25.307318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.307486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.307655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.307663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.307669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.307675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.499 [2024-11-20 17:21:25.319934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.499 [2024-11-20 17:21:25.320384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.499 [2024-11-20 17:21:25.320401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.499 [2024-11-20 17:21:25.320408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.320581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.320753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.320762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.320769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.320775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.499 [2024-11-20 17:21:25.333093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.499 [2024-11-20 17:21:25.333497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.499 [2024-11-20 17:21:25.333514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.499 [2024-11-20 17:21:25.333521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.333694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.333868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.333877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.333883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.333889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.499 [2024-11-20 17:21:25.346132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.499 [2024-11-20 17:21:25.346565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.499 [2024-11-20 17:21:25.346585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.499 [2024-11-20 17:21:25.346592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.346761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.346929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.346937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.346944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.346949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.499 [2024-11-20 17:21:25.359073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.499 [2024-11-20 17:21:25.359504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.499 [2024-11-20 17:21:25.359547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.499 [2024-11-20 17:21:25.359571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.360063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.360241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.360250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.360256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.360262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.499 [2024-11-20 17:21:25.371971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.499 [2024-11-20 17:21:25.372376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.499 [2024-11-20 17:21:25.372423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.499 [2024-11-20 17:21:25.372446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.372872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.373032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.373040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.373045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.373051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.499 [2024-11-20 17:21:25.384730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.499 [2024-11-20 17:21:25.385120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.499 [2024-11-20 17:21:25.385135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.499 [2024-11-20 17:21:25.385142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.499 [2024-11-20 17:21:25.385326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.499 [2024-11-20 17:21:25.385498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.499 [2024-11-20 17:21:25.385506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.499 [2024-11-20 17:21:25.385512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.499 [2024-11-20 17:21:25.385518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.500 [2024-11-20 17:21:25.397511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.500 [2024-11-20 17:21:25.397923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.500 [2024-11-20 17:21:25.397939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.500 [2024-11-20 17:21:25.397946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.500 [2024-11-20 17:21:25.398113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.500 [2024-11-20 17:21:25.398287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.500 [2024-11-20 17:21:25.398296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.500 [2024-11-20 17:21:25.398302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.500 [2024-11-20 17:21:25.398308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.500 [2024-11-20 17:21:25.410354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.500 [2024-11-20 17:21:25.410680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.500 [2024-11-20 17:21:25.410695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.500 [2024-11-20 17:21:25.410701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.500 [2024-11-20 17:21:25.410860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.500 [2024-11-20 17:21:25.411020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.500 [2024-11-20 17:21:25.411027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.500 [2024-11-20 17:21:25.411033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.500 [2024-11-20 17:21:25.411039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.500 [2024-11-20 17:21:25.423187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.500 [2024-11-20 17:21:25.423563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.500 [2024-11-20 17:21:25.423607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.500 [2024-11-20 17:21:25.423631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.500 [2024-11-20 17:21:25.424095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.500 [2024-11-20 17:21:25.424279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.500 [2024-11-20 17:21:25.424288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.500 [2024-11-20 17:21:25.424297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.500 [2024-11-20 17:21:25.424303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.500 [2024-11-20 17:21:25.435933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.500 [2024-11-20 17:21:25.436351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.500 [2024-11-20 17:21:25.436367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.500 [2024-11-20 17:21:25.436374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.500 [2024-11-20 17:21:25.436541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.500 [2024-11-20 17:21:25.436709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.500 [2024-11-20 17:21:25.436718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.500 [2024-11-20 17:21:25.436724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.500 [2024-11-20 17:21:25.436730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.500 [2024-11-20 17:21:25.448929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.500 [2024-11-20 17:21:25.449342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.500 [2024-11-20 17:21:25.449369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.500 [2024-11-20 17:21:25.449377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.500 [2024-11-20 17:21:25.449544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.500 [2024-11-20 17:21:25.449712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.500 [2024-11-20 17:21:25.449720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.500 [2024-11-20 17:21:25.449727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.500 [2024-11-20 17:21:25.449733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.500 [2024-11-20 17:21:25.461791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.500 [2024-11-20 17:21:25.462172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.500 [2024-11-20 17:21:25.462187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.500 [2024-11-20 17:21:25.462193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.500 [2024-11-20 17:21:25.462381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.500 [2024-11-20 17:21:25.462550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.500 [2024-11-20 17:21:25.462558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.500 [2024-11-20 17:21:25.462564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.500 [2024-11-20 17:21:25.462571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.500 [2024-11-20 17:21:25.474567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.500 [2024-11-20 17:21:25.474958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.500 [2024-11-20 17:21:25.474973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.500 [2024-11-20 17:21:25.474979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.500 [2024-11-20 17:21:25.475138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.500 [2024-11-20 17:21:25.475324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.500 [2024-11-20 17:21:25.475332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.500 [2024-11-20 17:21:25.475339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.500 [2024-11-20 17:21:25.475345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.500 [2024-11-20 17:21:25.487319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.500 [2024-11-20 17:21:25.487684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.500 [2024-11-20 17:21:25.487699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.500 [2024-11-20 17:21:25.487706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.500 [2024-11-20 17:21:25.487864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.500 [2024-11-20 17:21:25.488022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.500 [2024-11-20 17:21:25.488030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.500 [2024-11-20 17:21:25.488036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.500 [2024-11-20 17:21:25.488042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.501 [2024-11-20 17:21:25.500047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.501 [2024-11-20 17:21:25.500455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.501 [2024-11-20 17:21:25.500472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.501 [2024-11-20 17:21:25.500479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.501 [2024-11-20 17:21:25.500646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.501 [2024-11-20 17:21:25.500815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.501 [2024-11-20 17:21:25.500823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.501 [2024-11-20 17:21:25.500829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.501 [2024-11-20 17:21:25.500835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.501 [2024-11-20 17:21:25.512907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.501 [2024-11-20 17:21:25.513287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.501 [2024-11-20 17:21:25.513303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.501 [2024-11-20 17:21:25.513312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.501 [2024-11-20 17:21:25.513472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.501 [2024-11-20 17:21:25.513631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.501 [2024-11-20 17:21:25.513639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.501 [2024-11-20 17:21:25.513645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.501 [2024-11-20 17:21:25.513651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.501 [2024-11-20 17:21:25.525653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.501 [2024-11-20 17:21:25.526045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.501 [2024-11-20 17:21:25.526061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.501 [2024-11-20 17:21:25.526067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.501 [2024-11-20 17:21:25.526248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.501 [2024-11-20 17:21:25.526417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.501 [2024-11-20 17:21:25.526425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.501 [2024-11-20 17:21:25.526431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.501 [2024-11-20 17:21:25.526437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.761 [2024-11-20 17:21:25.538661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.761 [2024-11-20 17:21:25.538992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.761 [2024-11-20 17:21:25.539008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.761 [2024-11-20 17:21:25.539015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.761 [2024-11-20 17:21:25.539188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.761 [2024-11-20 17:21:25.539368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.761 [2024-11-20 17:21:25.539378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.761 [2024-11-20 17:21:25.539384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.761 [2024-11-20 17:21:25.539390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.761 [2024-11-20 17:21:25.551424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.761 [2024-11-20 17:21:25.551791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.761 [2024-11-20 17:21:25.551806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.761 [2024-11-20 17:21:25.551813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.761 [2024-11-20 17:21:25.551971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.761 [2024-11-20 17:21:25.552133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.761 [2024-11-20 17:21:25.552141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.761 [2024-11-20 17:21:25.552147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.761 [2024-11-20 17:21:25.552152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.761 [2024-11-20 17:21:25.564180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.761 [2024-11-20 17:21:25.564606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.761 [2024-11-20 17:21:25.564624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.761 [2024-11-20 17:21:25.564631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.761 [2024-11-20 17:21:25.564799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.761 [2024-11-20 17:21:25.564966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.761 [2024-11-20 17:21:25.564975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.761 [2024-11-20 17:21:25.564981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.761 [2024-11-20 17:21:25.564987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.761 [2024-11-20 17:21:25.576989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.761 [2024-11-20 17:21:25.577351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.761 [2024-11-20 17:21:25.577368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.761 [2024-11-20 17:21:25.577375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.761 [2024-11-20 17:21:25.577548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.761 [2024-11-20 17:21:25.577722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.761 [2024-11-20 17:21:25.577731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.761 [2024-11-20 17:21:25.577737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.761 [2024-11-20 17:21:25.577744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.761 [2024-11-20 17:21:25.590052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.761 [2024-11-20 17:21:25.590423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.761 [2024-11-20 17:21:25.590439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.761 [2024-11-20 17:21:25.590447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.761 [2024-11-20 17:21:25.590620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.761 [2024-11-20 17:21:25.590794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.761 [2024-11-20 17:21:25.590802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.761 [2024-11-20 17:21:25.590813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.761 [2024-11-20 17:21:25.590820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.761 [2024-11-20 17:21:25.602981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.761 [2024-11-20 17:21:25.603389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.761 [2024-11-20 17:21:25.603435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.761 [2024-11-20 17:21:25.603458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.604043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.604554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.604563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.604569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.604575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.615927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.616264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.616281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.616288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.616462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.616636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.616644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.616650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.616657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.628705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.629127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.629171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.629193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.629608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.629782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.629790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.629796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.629802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.641600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.642025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.642069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.642091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.642598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.642766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.642774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.642781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.642787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.654335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.654723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.654739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.654746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.654904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.655063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.655071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.655076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.655082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.667086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.667473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.667490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.667497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.667664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.667832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.667840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.667846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.667852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.679920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.680287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.680303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.680312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.680471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.680630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.680638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.680644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.680649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.692654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.693039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.693054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.693061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.693242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.693411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.693419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.693425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.693431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.705399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.705824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.705869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.705891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.706428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.706599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.706607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.706613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.706619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.718224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.718627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.718644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.718651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.762 [2024-11-20 17:21:25.718819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.762 [2024-11-20 17:21:25.718990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.762 [2024-11-20 17:21:25.718999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.762 [2024-11-20 17:21:25.719005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.762 [2024-11-20 17:21:25.719011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.762 [2024-11-20 17:21:25.731006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.762 [2024-11-20 17:21:25.731419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.762 [2024-11-20 17:21:25.731436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.762 [2024-11-20 17:21:25.731443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.763 [2024-11-20 17:21:25.731611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.763 [2024-11-20 17:21:25.731779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.763 [2024-11-20 17:21:25.731787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.763 [2024-11-20 17:21:25.731793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.763 [2024-11-20 17:21:25.731800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.763 [2024-11-20 17:21:25.743907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.763 [2024-11-20 17:21:25.744322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.763 [2024-11-20 17:21:25.744338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.763 [2024-11-20 17:21:25.744345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.763 [2024-11-20 17:21:25.744512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.763 [2024-11-20 17:21:25.744681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.763 [2024-11-20 17:21:25.744689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.763 [2024-11-20 17:21:25.744695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.763 [2024-11-20 17:21:25.744701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:07.763 5735.40 IOPS, 22.40 MiB/s [2024-11-20T16:21:25.806Z] [2024-11-20 17:21:25.756696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:07.763 [2024-11-20 17:21:25.757088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.763 [2024-11-20 17:21:25.757104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:07.763 [2024-11-20 17:21:25.757111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:07.763 [2024-11-20 17:21:25.757294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:07.763 [2024-11-20 17:21:25.757462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:07.763 [2024-11-20 17:21:25.757470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:07.763 [2024-11-20 17:21:25.757480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:07.763 [2024-11-20 17:21:25.757486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[The identical reset/reconnect failure cycle for tqpair=0xd01500 repeats 27 more times (connect() to 10.0.0.2, port=4420 fails with errno = 111, followed by "Bad file descriptor" flush, "Ctrlr is in error state", and "Resetting controller failed."), roughly every 13 ms from 17:21:25.769571 through 17:21:26.106214; only the timestamps differ.]
00:27:08.287 [2024-11-20 17:21:26.118333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.287 [2024-11-20 17:21:26.118714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.287 [2024-11-20 17:21:26.118758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.287 [2024-11-20 17:21:26.118781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.287 [2024-11-20 17:21:26.119380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.287 [2024-11-20 17:21:26.119766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.287 [2024-11-20 17:21:26.119775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.287 [2024-11-20 17:21:26.119781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.287 [2024-11-20 17:21:26.119787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.287 [2024-11-20 17:21:26.131285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.287 [2024-11-20 17:21:26.131622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.287 [2024-11-20 17:21:26.131639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.287 [2024-11-20 17:21:26.131646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.287 [2024-11-20 17:21:26.131814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.287 [2024-11-20 17:21:26.131984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.287 [2024-11-20 17:21:26.131992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.287 [2024-11-20 17:21:26.131998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.287 [2024-11-20 17:21:26.132005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.287 [2024-11-20 17:21:26.144134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.287 [2024-11-20 17:21:26.144434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.287 [2024-11-20 17:21:26.144450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.287 [2024-11-20 17:21:26.144458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.287 [2024-11-20 17:21:26.144626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.287 [2024-11-20 17:21:26.144794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.287 [2024-11-20 17:21:26.144802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.287 [2024-11-20 17:21:26.144808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.287 [2024-11-20 17:21:26.144814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.287 [2024-11-20 17:21:26.156947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.287 [2024-11-20 17:21:26.157314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.287 [2024-11-20 17:21:26.157330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.287 [2024-11-20 17:21:26.157338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.287 [2024-11-20 17:21:26.157505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.287 [2024-11-20 17:21:26.157674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.287 [2024-11-20 17:21:26.157682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.287 [2024-11-20 17:21:26.157688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.287 [2024-11-20 17:21:26.157694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.287 [2024-11-20 17:21:26.169707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.287 [2024-11-20 17:21:26.170056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.287 [2024-11-20 17:21:26.170072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.287 [2024-11-20 17:21:26.170079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.287 [2024-11-20 17:21:26.170252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.287 [2024-11-20 17:21:26.170421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.287 [2024-11-20 17:21:26.170430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.287 [2024-11-20 17:21:26.170436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.287 [2024-11-20 17:21:26.170442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.287 [2024-11-20 17:21:26.182596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.287 [2024-11-20 17:21:26.182947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.287 [2024-11-20 17:21:26.182963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.287 [2024-11-20 17:21:26.182970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.287 [2024-11-20 17:21:26.183140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.287 [2024-11-20 17:21:26.183315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.287 [2024-11-20 17:21:26.183324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.287 [2024-11-20 17:21:26.183330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.287 [2024-11-20 17:21:26.183336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.287 [2024-11-20 17:21:26.195474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.287 [2024-11-20 17:21:26.195751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.287 [2024-11-20 17:21:26.195768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.287 [2024-11-20 17:21:26.195775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.287 [2024-11-20 17:21:26.195943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.287 [2024-11-20 17:21:26.196112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.287 [2024-11-20 17:21:26.196120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.287 [2024-11-20 17:21:26.196126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.287 [2024-11-20 17:21:26.196132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.287 [2024-11-20 17:21:26.208368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.287 [2024-11-20 17:21:26.208780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.287 [2024-11-20 17:21:26.208795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.287 [2024-11-20 17:21:26.208802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.287 [2024-11-20 17:21:26.208961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.287 [2024-11-20 17:21:26.209120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.209128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.209134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 [2024-11-20 17:21:26.209139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.288 [2024-11-20 17:21:26.221232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.288 [2024-11-20 17:21:26.221497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-11-20 17:21:26.221512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.288 [2024-11-20 17:21:26.221519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.288 [2024-11-20 17:21:26.221688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.288 [2024-11-20 17:21:26.221856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.221867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.221873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 [2024-11-20 17:21:26.221879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.288 [2024-11-20 17:21:26.234037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.288 [2024-11-20 17:21:26.234332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-11-20 17:21:26.234348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.288 [2024-11-20 17:21:26.234355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.288 [2024-11-20 17:21:26.234533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.288 [2024-11-20 17:21:26.234693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.234701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.234707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 [2024-11-20 17:21:26.234712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.288 [2024-11-20 17:21:26.246908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.288 [2024-11-20 17:21:26.247271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-11-20 17:21:26.247287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.288 [2024-11-20 17:21:26.247295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.288 [2024-11-20 17:21:26.247462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.288 [2024-11-20 17:21:26.247630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.247639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.247645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 [2024-11-20 17:21:26.247651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.288 [2024-11-20 17:21:26.259878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.288 [2024-11-20 17:21:26.260247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-11-20 17:21:26.260264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.288 [2024-11-20 17:21:26.260271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.288 [2024-11-20 17:21:26.260455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.288 [2024-11-20 17:21:26.260628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.260636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.260643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2648564 Killed "${NVMF_APP[@]}" "$@" 00:27:08.288 [2024-11-20 17:21:26.260655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2649966 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2649966 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2649966 ']' 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.288 17:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:08.288 [2024-11-20 17:21:26.272891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.288 [2024-11-20 17:21:26.273247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-11-20 17:21:26.273263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.288 [2024-11-20 17:21:26.273271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.288 [2024-11-20 17:21:26.273444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.288 [2024-11-20 17:21:26.273619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.273628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.273634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 [2024-11-20 17:21:26.273641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.288 [2024-11-20 17:21:26.285854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.288 [2024-11-20 17:21:26.286285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-11-20 17:21:26.286301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.288 [2024-11-20 17:21:26.286308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.288 [2024-11-20 17:21:26.286481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.288 [2024-11-20 17:21:26.286655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.286663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.286673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 [2024-11-20 17:21:26.286679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.288 [2024-11-20 17:21:26.298902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.288 [2024-11-20 17:21:26.299337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-11-20 17:21:26.299353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.288 [2024-11-20 17:21:26.299361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.288 [2024-11-20 17:21:26.299533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.288 [2024-11-20 17:21:26.299707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.299715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.299721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 [2024-11-20 17:21:26.299728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.288 [2024-11-20 17:21:26.311900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.288 [2024-11-20 17:21:26.312349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.288 [2024-11-20 17:21:26.312366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.288 [2024-11-20 17:21:26.312373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.288 [2024-11-20 17:21:26.312555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.288 [2024-11-20 17:21:26.312724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.288 [2024-11-20 17:21:26.312733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.288 [2024-11-20 17:21:26.312739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.288 [2024-11-20 17:21:26.312745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.288 [2024-11-20 17:21:26.318668] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:27:08.288 [2024-11-20 17:21:26.318707] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.548 [2024-11-20 17:21:26.324937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.548 [2024-11-20 17:21:26.325365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.548 [2024-11-20 17:21:26.325382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.548 [2024-11-20 17:21:26.325390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.548 [2024-11-20 17:21:26.325564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.548 [2024-11-20 17:21:26.325738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.548 [2024-11-20 17:21:26.325747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.548 [2024-11-20 17:21:26.325757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.548 [2024-11-20 17:21:26.325763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.548 [2024-11-20 17:21:26.337930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.548 [2024-11-20 17:21:26.338341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.548 [2024-11-20 17:21:26.338359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.548 [2024-11-20 17:21:26.338367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.548 [2024-11-20 17:21:26.338541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.548 [2024-11-20 17:21:26.338716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.548 [2024-11-20 17:21:26.338724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.548 [2024-11-20 17:21:26.338730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.548 [2024-11-20 17:21:26.338737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.548 [2024-11-20 17:21:26.350978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.548 [2024-11-20 17:21:26.351386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.548 [2024-11-20 17:21:26.351403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.548 [2024-11-20 17:21:26.351412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.548 [2024-11-20 17:21:26.351586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.548 [2024-11-20 17:21:26.351760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.548 [2024-11-20 17:21:26.351769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.548 [2024-11-20 17:21:26.351776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.548 [2024-11-20 17:21:26.351783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.548 [2024-11-20 17:21:26.364028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.548 [2024-11-20 17:21:26.364460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.548 [2024-11-20 17:21:26.364476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.548 [2024-11-20 17:21:26.364484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.548 [2024-11-20 17:21:26.364658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.548 [2024-11-20 17:21:26.364832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.548 [2024-11-20 17:21:26.364841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.548 [2024-11-20 17:21:26.364849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.548 [2024-11-20 17:21:26.364856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.548 [2024-11-20 17:21:26.377096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.548 [2024-11-20 17:21:26.377435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.377452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.377460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.377633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.377807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.377817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.377823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.377830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.549 [2024-11-20 17:21:26.390043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.390483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.390500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.390507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.390681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.390856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.390864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.390871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.390877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.549 [2024-11-20 17:21:26.400315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:08.549 [2024-11-20 17:21:26.403127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.403579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.403596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.403603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.403777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.403951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.403959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.403966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.403972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.549 [2024-11-20 17:21:26.416069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.416508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.416529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.416537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.416711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.416883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.416892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.416899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.416905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.549 [2024-11-20 17:21:26.429151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.429516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.429533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.429540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.429713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.429887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.429896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.429902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.429908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:08.549 [2024-11-20 17:21:26.442114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.549 [2024-11-20 17:21:26.442137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.549 [2024-11-20 17:21:26.442144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.549 [2024-11-20 17:21:26.442149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:08.549 [2024-11-20 17:21:26.442154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.549 [2024-11-20 17:21:26.442210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.442642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.442659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.442667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.442840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.443018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.443026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.443033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.443039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.549 [2024-11-20 17:21:26.443458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.549 [2024-11-20 17:21:26.443494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.549 [2024-11-20 17:21:26.443494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.549 [2024-11-20 17:21:26.455312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.455763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.455783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.455792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.455967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.456142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.456151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.456158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.456166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.549 [2024-11-20 17:21:26.468400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.468787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.468806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.468815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.468989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.469165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.469173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.469180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.469187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.549 [2024-11-20 17:21:26.481443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.481885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.481905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.549 [2024-11-20 17:21:26.481913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.549 [2024-11-20 17:21:26.482087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.549 [2024-11-20 17:21:26.482266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.549 [2024-11-20 17:21:26.482275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.549 [2024-11-20 17:21:26.482282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.549 [2024-11-20 17:21:26.482290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.549 [2024-11-20 17:21:26.494521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.549 [2024-11-20 17:21:26.494962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.549 [2024-11-20 17:21:26.494982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.550 [2024-11-20 17:21:26.494990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.550 [2024-11-20 17:21:26.495165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.550 [2024-11-20 17:21:26.495344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.550 [2024-11-20 17:21:26.495353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.550 [2024-11-20 17:21:26.495361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.550 [2024-11-20 17:21:26.495369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.550 [2024-11-20 17:21:26.507604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.550 [2024-11-20 17:21:26.508028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-11-20 17:21:26.508046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.550 [2024-11-20 17:21:26.508055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.550 [2024-11-20 17:21:26.508235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.550 [2024-11-20 17:21:26.508410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.550 [2024-11-20 17:21:26.508418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.550 [2024-11-20 17:21:26.508425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.550 [2024-11-20 17:21:26.508432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.550 [2024-11-20 17:21:26.520657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.550 [2024-11-20 17:21:26.521091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-11-20 17:21:26.521108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.550 [2024-11-20 17:21:26.521116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.550 [2024-11-20 17:21:26.521295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.550 [2024-11-20 17:21:26.521470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.550 [2024-11-20 17:21:26.521478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.550 [2024-11-20 17:21:26.521485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.550 [2024-11-20 17:21:26.521491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.550 [2024-11-20 17:21:26.533710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.550 [2024-11-20 17:21:26.534139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-11-20 17:21:26.534155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.550 [2024-11-20 17:21:26.534167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.550 [2024-11-20 17:21:26.534346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.550 [2024-11-20 17:21:26.534520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.550 [2024-11-20 17:21:26.534528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.550 [2024-11-20 17:21:26.534535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.550 [2024-11-20 17:21:26.534541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.550 [2024-11-20 17:21:26.546776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.550 [2024-11-20 17:21:26.547214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-11-20 17:21:26.547230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.550 [2024-11-20 17:21:26.547237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.550 [2024-11-20 17:21:26.547411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.550 [2024-11-20 17:21:26.547584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.550 [2024-11-20 17:21:26.547592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.550 [2024-11-20 17:21:26.547599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.550 [2024-11-20 17:21:26.547605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.550 [2024-11-20 17:21:26.559816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.550 [2024-11-20 17:21:26.560168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-11-20 17:21:26.560184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.550 [2024-11-20 17:21:26.560191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.550 [2024-11-20 17:21:26.560367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.550 [2024-11-20 17:21:26.560541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.550 [2024-11-20 17:21:26.560550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.550 [2024-11-20 17:21:26.560556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.550 [2024-11-20 17:21:26.560562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.550 [2024-11-20 17:21:26.572938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.550 [2024-11-20 17:21:26.573343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-11-20 17:21:26.573360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.550 [2024-11-20 17:21:26.573367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.550 [2024-11-20 17:21:26.573540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.550 [2024-11-20 17:21:26.573716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.550 [2024-11-20 17:21:26.573724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.550 [2024-11-20 17:21:26.573731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.550 [2024-11-20 17:21:26.573737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.550 [2024-11-20 17:21:26.585952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.550 [2024-11-20 17:21:26.586222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.550 [2024-11-20 17:21:26.586239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.550 [2024-11-20 17:21:26.586246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.550 [2024-11-20 17:21:26.586419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.550 [2024-11-20 17:21:26.586593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.550 [2024-11-20 17:21:26.586601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.550 [2024-11-20 17:21:26.586607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.550 [2024-11-20 17:21:26.586613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.810 [2024-11-20 17:21:26.599017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.810 [2024-11-20 17:21:26.599445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.810 [2024-11-20 17:21:26.599462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.810 [2024-11-20 17:21:26.599469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.810 [2024-11-20 17:21:26.599642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.810 [2024-11-20 17:21:26.599816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.810 [2024-11-20 17:21:26.599824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.810 [2024-11-20 17:21:26.599830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.810 [2024-11-20 17:21:26.599836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.810 [2024-11-20 17:21:26.612055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.810 [2024-11-20 17:21:26.612474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.810 [2024-11-20 17:21:26.612490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.810 [2024-11-20 17:21:26.612497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.810 [2024-11-20 17:21:26.612669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.810 [2024-11-20 17:21:26.612843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.810 [2024-11-20 17:21:26.612851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.810 [2024-11-20 17:21:26.612862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.810 [2024-11-20 17:21:26.612868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.810 [2024-11-20 17:21:26.625110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.810 [2024-11-20 17:21:26.625437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.810 [2024-11-20 17:21:26.625453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.810 [2024-11-20 17:21:26.625460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.810 [2024-11-20 17:21:26.625633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.810 [2024-11-20 17:21:26.625807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.810 [2024-11-20 17:21:26.625814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.810 [2024-11-20 17:21:26.625821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.810 [2024-11-20 17:21:26.625827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.810 [2024-11-20 17:21:26.638205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.810 [2024-11-20 17:21:26.638634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.810 [2024-11-20 17:21:26.638650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.810 [2024-11-20 17:21:26.638658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.810 [2024-11-20 17:21:26.638829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.810 [2024-11-20 17:21:26.639003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.810 [2024-11-20 17:21:26.639011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.810 [2024-11-20 17:21:26.639017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.810 [2024-11-20 17:21:26.639023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.810 [2024-11-20 17:21:26.651259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.810 [2024-11-20 17:21:26.651689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.810 [2024-11-20 17:21:26.651705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.810 [2024-11-20 17:21:26.651712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.810 [2024-11-20 17:21:26.651885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.810 [2024-11-20 17:21:26.652059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.810 [2024-11-20 17:21:26.652067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.810 [2024-11-20 17:21:26.652074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.810 [2024-11-20 17:21:26.652080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.810 [2024-11-20 17:21:26.664316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:08.810 [2024-11-20 17:21:26.664658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.810 [2024-11-20 17:21:26.664674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:08.810 [2024-11-20 17:21:26.664681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:08.810 [2024-11-20 17:21:26.664855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:08.810 [2024-11-20 17:21:26.665028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:08.810 [2024-11-20 17:21:26.665036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:08.810 [2024-11-20 17:21:26.665043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:08.810 [2024-11-20 17:21:26.665048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:08.810 [2024-11-20 17:21:26.677281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.810 [2024-11-20 17:21:26.677715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.810 [2024-11-20 17:21:26.677731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.810 [2024-11-20 17:21:26.677739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.810 [2024-11-20 17:21:26.677911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.810 [2024-11-20 17:21:26.678085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.810 [2024-11-20 17:21:26.678093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.810 [2024-11-20 17:21:26.678100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.810 [2024-11-20 17:21:26.678105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.810 [2024-11-20 17:21:26.690327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.810 [2024-11-20 17:21:26.690669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.690685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.690693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.690865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.691039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.691047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.691054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.691059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.703457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.703862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.703878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.703888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.704061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.704239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.704248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.704254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.704260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.716482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.716914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.716930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.716937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.717109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.717286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.717294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.717301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.717307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.729548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.729998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.730013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.730021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.730194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.730374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.730383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.730389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.730395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.742612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.743047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.743063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.743070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.743249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.743428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.743436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.743443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.743449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 4779.50 IOPS, 18.67 MiB/s [2024-11-20T16:21:26.854Z] [2024-11-20 17:21:26.755652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.756091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.756108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.756115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.756294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.756469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.756477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.756483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.756489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.768722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.769156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.769172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.769179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.769357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.769531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.769538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.769545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.769551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.781765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.782198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.782219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.782226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.782399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.782571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.782579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.782590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.782596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.794822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.795253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.795270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.795278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.795451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.795625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.795633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.795639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.795645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.807891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.811 [2024-11-20 17:21:26.808328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.811 [2024-11-20 17:21:26.808344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.811 [2024-11-20 17:21:26.808351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.811 [2024-11-20 17:21:26.808524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.811 [2024-11-20 17:21:26.808698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.811 [2024-11-20 17:21:26.808705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.811 [2024-11-20 17:21:26.808712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.811 [2024-11-20 17:21:26.808718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.811 [2024-11-20 17:21:26.820941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.812 [2024-11-20 17:21:26.821373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.812 [2024-11-20 17:21:26.821389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.812 [2024-11-20 17:21:26.821396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.812 [2024-11-20 17:21:26.821568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.812 [2024-11-20 17:21:26.821741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.812 [2024-11-20 17:21:26.821749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.812 [2024-11-20 17:21:26.821756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.812 [2024-11-20 17:21:26.821762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.812 [2024-11-20 17:21:26.833967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.812 [2024-11-20 17:21:26.834400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.812 [2024-11-20 17:21:26.834417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.812 [2024-11-20 17:21:26.834424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.812 [2024-11-20 17:21:26.834597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.812 [2024-11-20 17:21:26.834771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.812 [2024-11-20 17:21:26.834779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.812 [2024-11-20 17:21:26.834786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.812 [2024-11-20 17:21:26.834792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:08.812 [2024-11-20 17:21:26.847016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:08.812 [2024-11-20 17:21:26.847455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.812 [2024-11-20 17:21:26.847471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:08.812 [2024-11-20 17:21:26.847478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:08.812 [2024-11-20 17:21:26.847650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:08.812 [2024-11-20 17:21:26.847823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:08.812 [2024-11-20 17:21:26.847831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:08.812 [2024-11-20 17:21:26.847837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:08.812 [2024-11-20 17:21:26.847843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.070 [2024-11-20 17:21:26.860076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.070 [2024-11-20 17:21:26.860517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.070 [2024-11-20 17:21:26.860534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.070 [2024-11-20 17:21:26.860542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.070 [2024-11-20 17:21:26.860714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.070 [2024-11-20 17:21:26.860888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.070 [2024-11-20 17:21:26.860896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.070 [2024-11-20 17:21:26.860903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.070 [2024-11-20 17:21:26.860909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.070 [2024-11-20 17:21:26.873127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.070 [2024-11-20 17:21:26.873560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.070 [2024-11-20 17:21:26.873577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.873591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.873764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.873939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.873946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.873953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.873959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.886176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.071 [2024-11-20 17:21:26.886583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.071 [2024-11-20 17:21:26.886599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.886606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.886778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.886952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.886960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.886967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.886973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.899199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.071 [2024-11-20 17:21:26.899529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.071 [2024-11-20 17:21:26.899545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.899552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.899725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.899898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.899906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.899912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.899918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.912318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.071 [2024-11-20 17:21:26.912720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.071 [2024-11-20 17:21:26.912737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.912744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.912916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.913092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.913100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.913107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.913113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.925310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.071 [2024-11-20 17:21:26.925720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.071 [2024-11-20 17:21:26.925736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.925744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.925916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.926089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.926097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.926104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.926110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.938314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.071 [2024-11-20 17:21:26.938722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.071 [2024-11-20 17:21:26.938738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.938746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.938919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.939093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.939101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.939107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.939113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.951331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.071 [2024-11-20 17:21:26.951711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.071 [2024-11-20 17:21:26.951728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.951735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.951907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.952081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.952089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.952098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.952105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.964325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.071 [2024-11-20 17:21:26.964757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.071 [2024-11-20 17:21:26.964773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.964780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.964953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.965126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.965134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.965140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.965146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.977362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:09.071 [2024-11-20 17:21:26.977795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.071 [2024-11-20 17:21:26.977812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420
00:27:09.071 [2024-11-20 17:21:26.977819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set
00:27:09.071 [2024-11-20 17:21:26.977991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor
00:27:09.071 [2024-11-20 17:21:26.978165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:09.071 [2024-11-20 17:21:26.978173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:09.071 [2024-11-20 17:21:26.978179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:09.071 [2024-11-20 17:21:26.978185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:09.071 [2024-11-20 17:21:26.990412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.071 [2024-11-20 17:21:26.990848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.071 [2024-11-20 17:21:26.990864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.071 [2024-11-20 17:21:26.990871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.071 [2024-11-20 17:21:26.991044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.071 [2024-11-20 17:21:26.991222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.071 [2024-11-20 17:21:26.991231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.071 [2024-11-20 17:21:26.991238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.071 [2024-11-20 17:21:26.991244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.071 [2024-11-20 17:21:27.003474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.003833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.003848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.003856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.004028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.004207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.004216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.004222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.004229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.072 [2024-11-20 17:21:27.016454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.016864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.016880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.016888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.017060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.017237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.017246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.017252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.017258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.072 [2024-11-20 17:21:27.029468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.029819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.029835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.029843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.030014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.030187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.030195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.030207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.030214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.072 [2024-11-20 17:21:27.042588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.042996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.043012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.043023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.043196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.043374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.043382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.043388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.043394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.072 [2024-11-20 17:21:27.055622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.055987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.056004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.056012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.056186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.056365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.056374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.056381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.056387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.072 [2024-11-20 17:21:27.068592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.069025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.069042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.069049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.069227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.069402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.069411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.069417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.069424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.072 [2024-11-20 17:21:27.081637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.081989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.082006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.082013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.082185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.082368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.082377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.082383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.082391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.072 [2024-11-20 17:21:27.094759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.095179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.095196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.095209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.095382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.095556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.095564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.095571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.095577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.072 [2024-11-20 17:21:27.107805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.072 [2024-11-20 17:21:27.108242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-11-20 17:21:27.108260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.072 [2024-11-20 17:21:27.108267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.072 [2024-11-20 17:21:27.108439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.072 [2024-11-20 17:21:27.108613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.072 [2024-11-20 17:21:27.108621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.072 [2024-11-20 17:21:27.108629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.072 [2024-11-20 17:21:27.108635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.331 [2024-11-20 17:21:27.120899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.331 [2024-11-20 17:21:27.121248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.331 [2024-11-20 17:21:27.121264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.331 [2024-11-20 17:21:27.121272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.331 [2024-11-20 17:21:27.121649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.331 [2024-11-20 17:21:27.121822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.331 [2024-11-20 17:21:27.121830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.331 [2024-11-20 17:21:27.121837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.331 [2024-11-20 17:21:27.121846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.331 [2024-11-20 17:21:27.133898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.331 [2024-11-20 17:21:27.134306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.331 [2024-11-20 17:21:27.134323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.331 [2024-11-20 17:21:27.134331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.331 [2024-11-20 17:21:27.134505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.332 [2024-11-20 17:21:27.134679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.332 [2024-11-20 17:21:27.134687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.332 [2024-11-20 17:21:27.134694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.332 [2024-11-20 17:21:27.134700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.332 [2024-11-20 17:21:27.146920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.332 [2024-11-20 17:21:27.147310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.332 [2024-11-20 17:21:27.147328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.332 [2024-11-20 17:21:27.147336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.332 [2024-11-20 17:21:27.147510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.332 [2024-11-20 17:21:27.147684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.332 [2024-11-20 17:21:27.147692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.332 [2024-11-20 17:21:27.147699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.332 [2024-11-20 17:21:27.147705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.332 [2024-11-20 17:21:27.159937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.332 [2024-11-20 17:21:27.160367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.332 [2024-11-20 17:21:27.160385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.332 [2024-11-20 17:21:27.160392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.332 [2024-11-20 17:21:27.160565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.332 [2024-11-20 17:21:27.160739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.332 [2024-11-20 17:21:27.160747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.332 [2024-11-20 17:21:27.160757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.332 [2024-11-20 17:21:27.160764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.332 [2024-11-20 17:21:27.172994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.332 [2024-11-20 17:21:27.173406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.332 [2024-11-20 17:21:27.173423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.332 [2024-11-20 17:21:27.173430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.332 [2024-11-20 17:21:27.173603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.332 [2024-11-20 17:21:27.173777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.332 [2024-11-20 17:21:27.173785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.332 [2024-11-20 17:21:27.173792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.332 [2024-11-20 17:21:27.173798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.332 [2024-11-20 17:21:27.186042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.332 [2024-11-20 17:21:27.186337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.332 [2024-11-20 17:21:27.186354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.332 [2024-11-20 17:21:27.186361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.332 [2024-11-20 17:21:27.186533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.332 [2024-11-20 17:21:27.186708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.332 [2024-11-20 17:21:27.186716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.332 [2024-11-20 17:21:27.186722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.332 [2024-11-20 17:21:27.186728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.332 [2024-11-20 17:21:27.199226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.332 [2024-11-20 17:21:27.199588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.332 [2024-11-20 17:21:27.199605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.332 [2024-11-20 17:21:27.199612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.332 [2024-11-20 17:21:27.199784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.332 [2024-11-20 17:21:27.199957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.332 [2024-11-20 17:21:27.199968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.332 [2024-11-20 17:21:27.199975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.332 [2024-11-20 17:21:27.199981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.332 [2024-11-20 17:21:27.201506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.332 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.332 [2024-11-20 17:21:27.212211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.332 [2024-11-20 17:21:27.212612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.332 [2024-11-20 17:21:27.212628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.332 [2024-11-20 17:21:27.212636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.332 [2024-11-20 17:21:27.212809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.332 [2024-11-20 17:21:27.212982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.332 [2024-11-20 17:21:27.212989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.332 [2024-11-20 17:21:27.212996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.332 [2024-11-20 17:21:27.213002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.332 [2024-11-20 17:21:27.225226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.332 [2024-11-20 17:21:27.225665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.332 [2024-11-20 17:21:27.225681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.332 [2024-11-20 17:21:27.225689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.332 [2024-11-20 17:21:27.225861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.332 [2024-11-20 17:21:27.226035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.332 [2024-11-20 17:21:27.226043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.332 [2024-11-20 17:21:27.226049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.332 [2024-11-20 17:21:27.226055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.332 [2024-11-20 17:21:27.238282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.332 [2024-11-20 17:21:27.238713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.332 [2024-11-20 17:21:27.238729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.332 [2024-11-20 17:21:27.238736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.332 [2024-11-20 17:21:27.238909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.333 [2024-11-20 17:21:27.239087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.333 [2024-11-20 17:21:27.239095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.333 [2024-11-20 17:21:27.239101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.333 [2024-11-20 17:21:27.239107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.333 Malloc0 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.333 [2024-11-20 17:21:27.251314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.333 [2024-11-20 17:21:27.251720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.333 [2024-11-20 17:21:27.251736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.333 [2024-11-20 17:21:27.251743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.333 [2024-11-20 17:21:27.251916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.333 [2024-11-20 17:21:27.252090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.333 [2024-11-20 17:21:27.252098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.333 [2024-11-20 17:21:27.252105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.333 [2024-11-20 17:21:27.252111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.333 [2024-11-20 17:21:27.264319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.333 [2024-11-20 17:21:27.264727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.333 [2024-11-20 17:21:27.264743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01500 with addr=10.0.0.2, port=4420 00:27:09.333 [2024-11-20 17:21:27.264750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd01500 is same with the state(6) to be set 00:27:09.333 [2024-11-20 17:21:27.264923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd01500 (9): Bad file descriptor 00:27:09.333 [2024-11-20 17:21:27.265097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:09.333 [2024-11-20 17:21:27.265104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:09.333 [2024-11-20 17:21:27.265111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:09.333 [2024-11-20 17:21:27.265117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:09.333 [2024-11-20 17:21:27.269363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.333 17:21:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2648882 00:27:09.333 [2024-11-20 17:21:27.277343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:09.333 [2024-11-20 17:21:27.306147] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:10.830 4824.71 IOPS, 18.85 MiB/s [2024-11-20T16:21:29.803Z] 5639.75 IOPS, 22.03 MiB/s [2024-11-20T16:21:31.175Z] 6283.11 IOPS, 24.54 MiB/s [2024-11-20T16:21:32.109Z] 6805.80 IOPS, 26.59 MiB/s [2024-11-20T16:21:33.042Z] 7219.82 IOPS, 28.20 MiB/s [2024-11-20T16:21:33.975Z] 7572.75 IOPS, 29.58 MiB/s [2024-11-20T16:21:34.907Z] 7865.08 IOPS, 30.72 MiB/s [2024-11-20T16:21:35.839Z] 8116.07 IOPS, 31.70 MiB/s [2024-11-20T16:21:35.839Z] 8339.80 IOPS, 32.58 MiB/s 00:27:17.796 Latency(us) 00:27:17.796 [2024-11-20T16:21:35.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.796 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:17.796 Verification LBA range: start 0x0 length 0x4000 00:27:17.796 Nvme1n1 : 15.01 8344.38 32.60 13056.61 0.00 5961.71 585.14 13107.20 00:27:17.796 [2024-11-20T16:21:35.839Z] =================================================================================================================== 00:27:17.796 [2024-11-20T16:21:35.839Z] Total : 8344.38 32.60 13056.61 0.00 5961.71 585.14 13107.20 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.055 17:21:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.055 rmmod nvme_tcp 00:27:18.055 rmmod nvme_fabrics 00:27:18.055 rmmod nvme_keyring 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2649966 ']' 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2649966 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2649966 ']' 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2649966 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649966 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649966' 00:27:18.055 killing process with pid 2649966 00:27:18.055 
17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2649966 00:27:18.055 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2649966 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.314 17:21:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.850 00:27:20.850 real 0m26.340s 00:27:20.850 user 1m1.619s 00:27:20.850 sys 0m6.916s 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:20.850 ************************************ 00:27:20.850 END TEST nvmf_bdevperf 00:27:20.850 
************************************ 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.850 ************************************ 00:27:20.850 START TEST nvmf_target_disconnect 00:27:20.850 ************************************ 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:20.850 * Looking for test storage... 00:27:20.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:20.850 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:20.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.851 --rc genhtml_branch_coverage=1 00:27:20.851 --rc genhtml_function_coverage=1 00:27:20.851 --rc genhtml_legend=1 00:27:20.851 --rc geninfo_all_blocks=1 00:27:20.851 --rc geninfo_unexecuted_blocks=1 
00:27:20.851 00:27:20.851 ' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:20.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.851 --rc genhtml_branch_coverage=1 00:27:20.851 --rc genhtml_function_coverage=1 00:27:20.851 --rc genhtml_legend=1 00:27:20.851 --rc geninfo_all_blocks=1 00:27:20.851 --rc geninfo_unexecuted_blocks=1 00:27:20.851 00:27:20.851 ' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:20.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.851 --rc genhtml_branch_coverage=1 00:27:20.851 --rc genhtml_function_coverage=1 00:27:20.851 --rc genhtml_legend=1 00:27:20.851 --rc geninfo_all_blocks=1 00:27:20.851 --rc geninfo_unexecuted_blocks=1 00:27:20.851 00:27:20.851 ' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:20.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.851 --rc genhtml_branch_coverage=1 00:27:20.851 --rc genhtml_function_coverage=1 00:27:20.851 --rc genhtml_legend=1 00:27:20.851 --rc geninfo_all_blocks=1 00:27:20.851 --rc geninfo_unexecuted_blocks=1 00:27:20.851 00:27:20.851 ' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.851 17:21:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:20.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.851 17:21:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:27.419 
17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:27.419 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:27.419 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:27.419 Found net devices under 0000:86:00.0: cvl_0_0 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:27.419 Found net devices under 0000:86:00.1: cvl_0_1 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.419 17:21:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:27.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:27:27.419 00:27:27.419 --- 10.0.0.2 ping statistics --- 00:27:27.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.419 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:27.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:27:27.419 00:27:27.419 --- 10.0.0.1 ping statistics --- 00:27:27.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.419 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:27.419 17:21:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:27.419 ************************************ 00:27:27.419 START TEST nvmf_target_disconnect_tc1 00:27:27.419 ************************************ 00:27:27.419 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:27.420 [2024-11-20 17:21:44.701979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.420 [2024-11-20 17:21:44.702030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b41ab0 with 
addr=10.0.0.2, port=4420 00:27:27.420 [2024-11-20 17:21:44.702066] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:27.420 [2024-11-20 17:21:44.702079] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:27.420 [2024-11-20 17:21:44.702086] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:27.420 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:27.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:27.420 Initializing NVMe Controllers 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:27.420 00:27:27.420 real 0m0.125s 00:27:27.420 user 0m0.047s 00:27:27.420 sys 0m0.078s 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 ************************************ 00:27:27.420 END TEST nvmf_target_disconnect_tc1 00:27:27.420 ************************************ 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:27.420 17:21:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 ************************************ 00:27:27.420 START TEST nvmf_target_disconnect_tc2 00:27:27.420 ************************************ 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2655062 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2655062 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2655062 ']' 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.420 17:21:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 [2024-11-20 17:21:44.843998] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:27:27.420 [2024-11-20 17:21:44.844042] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.420 [2024-11-20 17:21:44.923348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.420 [2024-11-20 17:21:44.965142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.420 [2024-11-20 17:21:44.965178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.420 [2024-11-20 17:21:44.965186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.420 [2024-11-20 17:21:44.965192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.420 [2024-11-20 17:21:44.965197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
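An editorial aside (not part of the harness output): the `-m 0xF0` core mask passed to `nvmf_tgt` above selects CPU cores 4 through 7, which matches the four "Reactor started on core 4/5/6/7" notices in the log. A minimal sketch of how such a hex mask maps to core IDs, with the mask value taken from the command line above:

```python
# Decode an SPDK/DPDK-style hex core mask (e.g. nvmf_tgt -m 0xF0)
# into the list of CPU core IDs whose bits are set.
def cores_from_mask(mask: int) -> list[int]:
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask(0xF0))  # -> [4, 5, 6, 7]
```

Each set bit N in the mask pins one SPDK reactor thread to core N, which is why the app-start notice reports "Total cores available: 4" for this mask.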
00:27:27.420 [2024-11-20 17:21:44.966762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:27.420 [2024-11-20 17:21:44.966872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:27.420 [2024-11-20 17:21:44.967734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:27.420 [2024-11-20 17:21:44.967737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 Malloc0 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.420 17:21:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 [2024-11-20 17:21:45.133591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.420 17:21:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 [2024-11-20 17:21:45.158561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2655167 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:27.420 17:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:29.333 17:21:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2655062 00:27:29.333 17:21:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:29.333 Read completed with error (sct=0, sc=8) 00:27:29.333 starting I/O failed 00:27:29.333 Read completed with error (sct=0, sc=8) 00:27:29.333 starting I/O failed 00:27:29.333 Read completed with error (sct=0, sc=8) 00:27:29.333 starting I/O failed 00:27:29.333 Read completed with error (sct=0, sc=8) 00:27:29.333 starting I/O failed 00:27:29.333 Read completed with error (sct=0, sc=8) 00:27:29.333 starting I/O failed 00:27:29.333 Write completed with error (sct=0, sc=8) 00:27:29.333 starting I/O failed 00:27:29.333 Write completed with error (sct=0, sc=8) 00:27:29.333 starting I/O failed 00:27:29.333 Read completed with error (sct=0, sc=8) 00:27:29.333 starting I/O failed 00:27:29.333 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 
Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 [2024-11-20 17:21:47.185936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O 
failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 
00:27:29.334 [2024-11-20 17:21:47.186139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 
starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Read completed with error (sct=0, sc=8) 00:27:29.334 starting I/O failed 00:27:29.334 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 [2024-11-20 17:21:47.186336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 
00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Write completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 Read completed with error (sct=0, sc=8) 00:27:29.335 starting I/O failed 00:27:29.335 [2024-11-20 17:21:47.186526] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:29.335 [2024-11-20 17:21:47.186778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.335 [2024-11-20 17:21:47.186807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.335 qpair failed and we were unable to recover it.
00:27:29.335 [2024-11-20 17:21:47.187028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.335 [2024-11-20 17:21:47.187047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.335 qpair failed and we were unable to recover it.
[The same three-line failure sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 2024-11-20 17:21:47.187134 through 17:21:47.214035 (elapsed 00:27:29.335–00:27:29.339), varying only in timestamps.]
00:27:29.339 [2024-11-20 17:21:47.214227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.214260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.214474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.214516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.214705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.214738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.214856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.214888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.215085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.215115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 
00:27:29.339 [2024-11-20 17:21:47.215381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.215414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.215606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.215638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.215772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.215803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.216045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.216077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.216288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.216321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 
00:27:29.339 [2024-11-20 17:21:47.216501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.216532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.216774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.216806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.216991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.217021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.217241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.217274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.217406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.217437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 
00:27:29.339 [2024-11-20 17:21:47.217574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.339 [2024-11-20 17:21:47.217607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.339 qpair failed and we were unable to recover it. 00:27:29.339 [2024-11-20 17:21:47.217893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.217924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.218056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.218090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.218242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.218275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.218515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.218546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-11-20 17:21:47.218733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.218763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.218974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.219006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.219223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.219255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.219441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.219474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.219607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.219640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-11-20 17:21:47.219760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.219791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.220056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.220092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.220224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.220257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.220380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.220416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.220546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.220577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-11-20 17:21:47.220756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.220789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.221082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.221114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.221344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.221378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.221574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.221605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.221725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.221756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-11-20 17:21:47.222023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.222055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.222328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.222361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.222646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.222678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.222866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.222898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.223138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.223169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-11-20 17:21:47.223372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.223405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.223619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.223650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.223883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.223915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.224184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.224223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.224430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.224462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-11-20 17:21:47.224655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.224687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.224957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.224989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.225252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.225286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.225497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.225529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 00:27:29.340 [2024-11-20 17:21:47.225715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.340 [2024-11-20 17:21:47.225747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.340 qpair failed and we were unable to recover it. 
00:27:29.340 [2024-11-20 17:21:47.225947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.225979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.226253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.226287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.226536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.226567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.226867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.226900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.227160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.227192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-11-20 17:21:47.227406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.227439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.227634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.227667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.227927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.227959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.228170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.228224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.228489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.228522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-11-20 17:21:47.228716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.228747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.228935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.228966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.229176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.229217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.229455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.229487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.229746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.229778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-11-20 17:21:47.229956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.229989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.230239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.230272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.230468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.230500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.230653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.230684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.230968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.231000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-11-20 17:21:47.231294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.231327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.231503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.231535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.231806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.231837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.232081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.232113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.232428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.232460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-11-20 17:21:47.232650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.232681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.232941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.232973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.233150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.233181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.233454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.233490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.233736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.233768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 
00:27:29.341 [2024-11-20 17:21:47.234033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.341 [2024-11-20 17:21:47.234065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.341 qpair failed and we were unable to recover it. 00:27:29.341 [2024-11-20 17:21:47.234254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.234286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.234419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.234450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.234718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.234751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.234925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.234957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-11-20 17:21:47.235162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.235194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.235395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.235429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.235674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.235705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.236014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.236046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.236319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.236356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-11-20 17:21:47.236536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.236568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.236759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.236790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.237032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.237064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.237197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.237238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.237429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.237460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-11-20 17:21:47.237725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.237757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.237943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.237981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.238247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.238281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.238463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.238495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.238685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.238717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-11-20 17:21:47.238989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.239021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.239274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.239308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.239492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.239524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.239701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.239733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.239974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.240005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-11-20 17:21:47.240297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.240331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.240600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.240632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.240922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.240954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.241228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.241261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.241503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.241536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 
00:27:29.342 [2024-11-20 17:21:47.241747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.241779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.242019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.242051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.342 qpair failed and we were unable to recover it. 00:27:29.342 [2024-11-20 17:21:47.242296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.342 [2024-11-20 17:21:47.242330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.242565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.242597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.242809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.242841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-11-20 17:21:47.243110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.243141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.243436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.243469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.243711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.243743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.243982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.244013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.244242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.244276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-11-20 17:21:47.244521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.244552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.244749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.244781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.244974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.245006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.245191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.245238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.245436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.245468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-11-20 17:21:47.245727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.245758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.245945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.245977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.246173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.246212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.246343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.246376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.246576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.246608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-11-20 17:21:47.246890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.246923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.247188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.247231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.247461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.247493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.247682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.247714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.247917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.247949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-11-20 17:21:47.248160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.248191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.248463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.248497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.248784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.248816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.249006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.249038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.249297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.249331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 
00:27:29.343 [2024-11-20 17:21:47.249471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.249502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.249721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.249753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.250057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.250088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.343 [2024-11-20 17:21:47.250288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.343 [2024-11-20 17:21:47.250322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.343 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.250564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.250596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-11-20 17:21:47.250863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.250895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.251183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.251223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.251361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.251393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.251637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.251668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.251881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.251913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-11-20 17:21:47.252097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.252134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.252376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.252411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.252540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.252572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.252859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.252891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.253186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.253227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-11-20 17:21:47.253446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.253478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.253672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.253704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.253887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.253919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.254096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.254128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.254321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.254355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-11-20 17:21:47.254597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.254629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.254894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.254926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.255221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.255253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.255471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.255503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.255711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.255743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-11-20 17:21:47.256010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.256043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.256234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.256268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.256531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.256565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.256808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.256840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.257092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.257123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-11-20 17:21:47.257367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.257402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.257588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.257620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.257862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.257893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.258098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.258131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.258376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.258410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 
00:27:29.344 [2024-11-20 17:21:47.258610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.344 [2024-11-20 17:21:47.258642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.344 qpair failed and we were unable to recover it. 00:27:29.344 [2024-11-20 17:21:47.258760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.258792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.259037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.259070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.259264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.259297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.259531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.259563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 
00:27:29.345 [2024-11-20 17:21:47.259806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.259839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.260094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.260125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.260305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.260339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.260464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.260496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.260761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.260792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 
00:27:29.345 [2024-11-20 17:21:47.260932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.260964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.261143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.261176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.261425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.261510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.261807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.261843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.262050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.262083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 
00:27:29.345 [2024-11-20 17:21:47.262281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.262315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.262522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.262553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.262747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.262778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.263047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.263079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.263268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.263301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 
00:27:29.345 [2024-11-20 17:21:47.263500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.263531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.263796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.263827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.264078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.264109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.264379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.264411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.264624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.264655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 
00:27:29.345 [2024-11-20 17:21:47.264901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.264931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.345 [2024-11-20 17:21:47.265127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.345 [2024-11-20 17:21:47.265158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.345 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.265418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.265451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.265580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.265610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.265868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.265906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 
00:27:29.346 [2024-11-20 17:21:47.266034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.266065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.266276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.266310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.266575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.266606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.266794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.266825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.267022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.267053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 
00:27:29.346 [2024-11-20 17:21:47.267315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.267347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.267542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.267573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.267844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.267875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.268059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.268090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.268309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.268341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 
00:27:29.346 [2024-11-20 17:21:47.268476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.268506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.268697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.268727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.268929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.268960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.269266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.269300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.269551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.269583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 
00:27:29.346 [2024-11-20 17:21:47.269789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.269820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.270036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.270067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.270251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.270284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.270412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.270442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.270655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.270685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 
00:27:29.346 [2024-11-20 17:21:47.270926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.270957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.271157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.271188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.271389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.271420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.271671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.271702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.272006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.272037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 
00:27:29.346 [2024-11-20 17:21:47.272241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.272275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.272420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.346 [2024-11-20 17:21:47.272452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.346 qpair failed and we were unable to recover it. 00:27:29.346 [2024-11-20 17:21:47.272651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.272681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.272870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.272901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.273099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.273130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 
00:27:29.347 [2024-11-20 17:21:47.273323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.273355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.273568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.273599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.273707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.273736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.273948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.273979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.274162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.274193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 
00:27:29.347 [2024-11-20 17:21:47.274385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.274416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.274616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.274648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.274897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.274927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.275102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.275133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.275329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.275368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 
00:27:29.347 [2024-11-20 17:21:47.275622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.275652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.275791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.275821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.276079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.276110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.276402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.276434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.276691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.276725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 
00:27:29.347 [2024-11-20 17:21:47.276857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.276889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.277076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.277106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.277383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.277416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.277694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.277726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.277846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.277877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 
00:27:29.347 [2024-11-20 17:21:47.278015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.278063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.278251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.278284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.278477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.278507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.278618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.278648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.278843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.278872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 
00:27:29.347 [2024-11-20 17:21:47.278996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.279026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.347 qpair failed and we were unable to recover it. 00:27:29.347 [2024-11-20 17:21:47.279140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.347 [2024-11-20 17:21:47.279169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.279474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.279507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.279718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.279750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.279941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.279971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-11-20 17:21:47.280153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.280183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.280455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.280487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.280739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.280770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.280904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.280934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.281043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.281073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-11-20 17:21:47.281192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.281236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.281508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.281584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.281788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.281826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.282019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.282052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 00:27:29.348 [2024-11-20 17:21:47.282191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.348 [2024-11-20 17:21:47.282246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.348 qpair failed and we were unable to recover it. 
00:27:29.348 [2024-11-20 17:21:47.282363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.282396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.282511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.282541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.282750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.282782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.282972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.283004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.283259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.283293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.283443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.283474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.283739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.283771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.283987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.284019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.284238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.284271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.284400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.284431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.284668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.284701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.284944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.284976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.285183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.285228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.285351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.285381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.285651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.285683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.285924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.285956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.348 [2024-11-20 17:21:47.286136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.348 [2024-11-20 17:21:47.286168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.348 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.286435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.286468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.286667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.286699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.286893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.286932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.287175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.287217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.287438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.287470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.287667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.287699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.287911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.287955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.288142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.288175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.288369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.288401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.288585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.288617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.288794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.288825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.288997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.289029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.289285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.289319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.289427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.289458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.289649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.289680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.289811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.289844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.290092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.290124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.290319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.290353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.290616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.290648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.290778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.290809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.291026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.291058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.291252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.291284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.291525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.291556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.291679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.291711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.291839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.291872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.292051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.292082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.292193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.292241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.292417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.292448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.292634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.292666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.292852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.292884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.293130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.293163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.293375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.349 [2024-11-20 17:21:47.293408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.349 qpair failed and we were unable to recover it.
00:27:29.349 [2024-11-20 17:21:47.293605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.293637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.293821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.293858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.294043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.294075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.294278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.294312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.294506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.294538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.294736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.294768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.294961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.294992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.295166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.295198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.295396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.295428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.295548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.295580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.295690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.295722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.295984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.296016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.296210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.296243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.296433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.296464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.296673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.296705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.296929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.296961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.297184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.297226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.297426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.297457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.297636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.297667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.297915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.297948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.298225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.298258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.298444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.298476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.298670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.298702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.298920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.298951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.299141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.299174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.299459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.299491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.299609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.299641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.299903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.299936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.300219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.300253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.300453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.300485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.300750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.300782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.350 [2024-11-20 17:21:47.300968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.350 [2024-11-20 17:21:47.301000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.350 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.301173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.301215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.301513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.301545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.301763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.301796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.302036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.302068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.302315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.302349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.302596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.302628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.302809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.302841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.302969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.303001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.303134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.303165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.303416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.303450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.303673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.303705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.303822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.303854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.304054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.304086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.304341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.304375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.304500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.304532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.304719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.304751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.304926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.304957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.305255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.305289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.305483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.305516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.305722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.305754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.305950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.305981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.306250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.306284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.306413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.306445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.306565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.351 [2024-11-20 17:21:47.306597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.351 qpair failed and we were unable to recover it.
00:27:29.351 [2024-11-20 17:21:47.306780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.306812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.307082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.307113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.307304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.307337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.307508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.307541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.307738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.307769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.308029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.308061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.308181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.308238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.308431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.308463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.308615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.308647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.308779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.308810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.308993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.309025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.309213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.309246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.309370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.309402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.309589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.309626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.309835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.309866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.310002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.310034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.310278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.310312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.310508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.310539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.310725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.310758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.311022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.311055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.311287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.311321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.311515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.311547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.311814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.311846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.312046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.312079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.312266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.312299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.312420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.312453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.312640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.312672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.312862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.312894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.313163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.313195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.313422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.313455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.313647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.313679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.313933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.313964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.314217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.314252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.352 qpair failed and we were unable to recover it.
00:27:29.352 [2024-11-20 17:21:47.314461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.352 [2024-11-20 17:21:47.314493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.314697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.314729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.314926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.314958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.315216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.315249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.315448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.315480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.315734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.315765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.315883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.315914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.316104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.316142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.316327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.316360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.316551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.316583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.316824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.316856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.316973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.317005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.317196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.317239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.317435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.317467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.317668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.317700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.317836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.317867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.318096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.318128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.318372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.318406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.318650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.318681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.318890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.318922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.319049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.319081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.319328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.319361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.319499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.319531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.319671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.319704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.319970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.320001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.320122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.320153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.320351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.320384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.320654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.320686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.320858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.320889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.321021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.321053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.321307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.353 [2024-11-20 17:21:47.321340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.353 qpair failed and we were unable to recover it.
00:27:29.353 [2024-11-20 17:21:47.321540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.321571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.321696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.321729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.321844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.321877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.322086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.322124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.322248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.322282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.322486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.322518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.322782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.322814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.323062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.323095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.323360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.323393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.323583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.323615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.323853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.323885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.324079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.324111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.324349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.324382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.324594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.324626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.324889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.324921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.325033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.325066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.325241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.325274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.325396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.325428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.325628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.325660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.325916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.325948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.326082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.354 [2024-11-20 17:21:47.326114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.354 qpair failed and we were unable to recover it.
00:27:29.354 [2024-11-20 17:21:47.326299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.326333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.354 qpair failed and we were unable to recover it. 00:27:29.354 [2024-11-20 17:21:47.326526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.326557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.354 qpair failed and we were unable to recover it. 00:27:29.354 [2024-11-20 17:21:47.326743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.326776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.354 qpair failed and we were unable to recover it. 00:27:29.354 [2024-11-20 17:21:47.326987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.327018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.354 qpair failed and we were unable to recover it. 00:27:29.354 [2024-11-20 17:21:47.327286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.327320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.354 qpair failed and we were unable to recover it. 
00:27:29.354 [2024-11-20 17:21:47.327511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.327543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.354 qpair failed and we were unable to recover it. 00:27:29.354 [2024-11-20 17:21:47.327819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.327850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.354 qpair failed and we were unable to recover it. 00:27:29.354 [2024-11-20 17:21:47.328090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.328123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.354 qpair failed and we were unable to recover it. 00:27:29.354 [2024-11-20 17:21:47.328318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.354 [2024-11-20 17:21:47.328351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.328473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.328505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 
00:27:29.355 [2024-11-20 17:21:47.328779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.328811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.329031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.329063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.329333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.329366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.329556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.329587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.329710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.329741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 
00:27:29.355 [2024-11-20 17:21:47.329929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.329961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.330138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.330170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.330422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.330455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.330583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.330615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.330821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.330854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 
00:27:29.355 [2024-11-20 17:21:47.331028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.331059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.331248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.331282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.331491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.331523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.331701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.331733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.331924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.331956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 
00:27:29.355 [2024-11-20 17:21:47.332173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.332211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.332337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.332370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.332479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.332511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.332796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.332828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.333016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.333048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 
00:27:29.355 [2024-11-20 17:21:47.333173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.333229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.333366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.333398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.333516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.333548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.333722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.333754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.333925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.333957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 
00:27:29.355 [2024-11-20 17:21:47.334132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.334164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.334380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.334413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.334735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.334767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.355 [2024-11-20 17:21:47.334985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.355 [2024-11-20 17:21:47.335018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.355 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.335288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.335321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 
00:27:29.356 [2024-11-20 17:21:47.335442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.335474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.335668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.335700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.335924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.335955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.336147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.336179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.336383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.336415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 
00:27:29.356 [2024-11-20 17:21:47.336540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.336572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.336747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.336779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.337019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.337051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.337340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.337374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.337589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.337621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 
00:27:29.356 [2024-11-20 17:21:47.337805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.337841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.338016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.338047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.338298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.338331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.338465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.338497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.338672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.338703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 
00:27:29.356 [2024-11-20 17:21:47.338919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.338950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.339139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.339170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.339354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.339387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.339594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.339626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.339836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.339868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 
00:27:29.356 [2024-11-20 17:21:47.340012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.340044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.340221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.340254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.340505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.340536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.340778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.340810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.340989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.341021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 
00:27:29.356 [2024-11-20 17:21:47.341135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.341166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.341361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.341394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.341588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.356 [2024-11-20 17:21:47.341619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.356 qpair failed and we were unable to recover it. 00:27:29.356 [2024-11-20 17:21:47.341805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.341837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.342032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.342063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 
00:27:29.357 [2024-11-20 17:21:47.342254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.342287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.342420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.342452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.342639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.342669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.342792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.342824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.343014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.343045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 
00:27:29.357 [2024-11-20 17:21:47.343151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.343182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.343433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.343465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.343686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.343724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.343998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.344030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.344274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.344307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 
00:27:29.357 [2024-11-20 17:21:47.344480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.344512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.344752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.344784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.345032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.345063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.345193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.345233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.345474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.345506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 
00:27:29.357 [2024-11-20 17:21:47.345697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.345729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.345908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.345940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.346129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.346161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.346355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.346388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 00:27:29.357 [2024-11-20 17:21:47.346650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.357 [2024-11-20 17:21:47.346681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.357 qpair failed and we were unable to recover it. 
00:27:29.647 [2024-11-20 17:21:47.371449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.647 [2024-11-20 17:21:47.371480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.647 qpair failed and we were unable to recover it. 00:27:29.647 [2024-11-20 17:21:47.371668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.647 [2024-11-20 17:21:47.371700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.647 qpair failed and we were unable to recover it. 00:27:29.647 [2024-11-20 17:21:47.371888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.647 [2024-11-20 17:21:47.371919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.647 qpair failed and we were unable to recover it. 00:27:29.647 [2024-11-20 17:21:47.372056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.647 [2024-11-20 17:21:47.372089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.647 qpair failed and we were unable to recover it. 00:27:29.647 [2024-11-20 17:21:47.372357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.372391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 
00:27:29.648 [2024-11-20 17:21:47.372561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.372592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.372845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.372878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.372989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.373021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.373153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.373184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.373330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.373361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 
00:27:29.648 [2024-11-20 17:21:47.373547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.373578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.373753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.373784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.373906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.373937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.374126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.374158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.374362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.374395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 
00:27:29.648 [2024-11-20 17:21:47.374531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.374563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.374744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.374775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.374975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.375007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.375249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.375283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.375458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.375489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 
00:27:29.648 [2024-11-20 17:21:47.375672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.375704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.375874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.375905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.376075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.376106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.376313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.376346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.376527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.376560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 
00:27:29.648 [2024-11-20 17:21:47.376671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.376703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.376908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.376939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.377145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.377177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.377331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.377364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.377553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.377584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 
00:27:29.648 [2024-11-20 17:21:47.377779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.377811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.377998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.378029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.378152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.378183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.378447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.378480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.378588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.378620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 
00:27:29.648 [2024-11-20 17:21:47.378812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.378843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.379017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.379050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.379244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.379278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.379475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.379506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.379705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.379737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 
00:27:29.648 [2024-11-20 17:21:47.379921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.379954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.380086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.648 [2024-11-20 17:21:47.380117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.648 qpair failed and we were unable to recover it. 00:27:29.648 [2024-11-20 17:21:47.380310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.380344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.380608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.380640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.380833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.380864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 
00:27:29.649 [2024-11-20 17:21:47.381053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.381085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.381221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.381253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.381431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.381462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.381600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.381631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.381830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.381861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 
00:27:29.649 [2024-11-20 17:21:47.382075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.382106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.382291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.382325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.382513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.382544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.382666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.382697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.382888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.382920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 
00:27:29.649 [2024-11-20 17:21:47.383059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.383091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.383266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.383300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.383563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.383595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.383839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.383870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.384125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.384157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 
00:27:29.649 [2024-11-20 17:21:47.384285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.384319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.384513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.384545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.384667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.384699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.384884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.384916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.385114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.385148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 
00:27:29.649 [2024-11-20 17:21:47.385291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.385324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.385452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.385484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.385602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.385640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.385836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.385868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.385998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.386030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 
00:27:29.649 [2024-11-20 17:21:47.386213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.386246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.386369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.386401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.386602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.386633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.386813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.386844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 00:27:29.649 [2024-11-20 17:21:47.386958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.649 [2024-11-20 17:21:47.386989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.649 qpair failed and we were unable to recover it. 
00:27:29.649 [2024-11-20 17:21:47.387110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.649 [2024-11-20 17:21:47.387141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.649 qpair failed and we were unable to recover it.
[... the three-line sequence above repeats with only the timestamps changing (2024-11-20 17:21:47.387–17:21:47.408, log offsets 00:27:29.649–00:27:29.652), always tqpair=0x14e4ba0, addr=10.0.0.2, port=4420, errno = 111 ...]
00:27:29.652 [2024-11-20 17:21:47.408421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.652 [2024-11-20 17:21:47.408484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.652 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7fc468000b90 (17:21:47.408–17:21:47.410) and again for tqpair=0x14e4ba0 (17:21:47.410–17:21:47.411), ending at log offset 00:27:29.653 ...]
00:27:29.653 [2024-11-20 17:21:47.411176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.411218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.411495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.411527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.411703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.411735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.411859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.411893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.412089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.412121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 
00:27:29.653 [2024-11-20 17:21:47.412311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.412344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.412534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.412567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.412752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.412783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.412973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.413005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.413222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.413255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 
00:27:29.653 [2024-11-20 17:21:47.413511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.413544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.413688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.413719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.413901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.413932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.414122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.414153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.414357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.414389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 
00:27:29.653 [2024-11-20 17:21:47.414578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.414610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.414827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.414858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.415064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.415096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.415311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.415345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.415495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.415526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 
00:27:29.653 [2024-11-20 17:21:47.415734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.415765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.416032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.416064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.416331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.416364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.416500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.416532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.416714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.416744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 
00:27:29.653 [2024-11-20 17:21:47.416863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.416893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.417031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.417068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.417273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.417307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.417518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.417549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.417660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.417691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 
00:27:29.653 [2024-11-20 17:21:47.417812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.417843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.417959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.417991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.418178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.418226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.418407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.418441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.653 [2024-11-20 17:21:47.418627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.418659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 
00:27:29.653 [2024-11-20 17:21:47.418774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.653 [2024-11-20 17:21:47.418805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.653 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.418985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.419016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.419147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.419179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.419310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.419342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.419532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.419563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-11-20 17:21:47.419685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.419716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.419840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.419872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.420082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.420113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.420287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.420320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.420495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.420525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-11-20 17:21:47.420649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.420680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.420789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.420819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.420944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.420974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.421110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.421141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.421263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.421295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-11-20 17:21:47.421398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.421429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.421624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.421657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.421854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.421885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.422021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.422063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.422178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.422216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-11-20 17:21:47.422413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.422446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.422703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.422734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.422862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.422894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.423091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.423122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.423315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.423348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-11-20 17:21:47.423475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.423506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.423684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.423715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.423836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.423869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.424044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.424074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.424277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.424310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-11-20 17:21:47.424423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.424454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.424589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.424620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.424805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.424836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.424937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.424967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.425149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.425182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 
00:27:29.654 [2024-11-20 17:21:47.425376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.425408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.425522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.425553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.425758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.425789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.654 [2024-11-20 17:21:47.425902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 17:21:47.425934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.654 qpair failed and we were unable to recover it. 00:27:29.655 [2024-11-20 17:21:47.426054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.655 [2024-11-20 17:21:47.426086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.655 qpair failed and we were unable to recover it. 
00:27:29.655 [2024-11-20 17:21:47.426223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.655 [2024-11-20 17:21:47.426256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.655 qpair failed and we were unable to recover it.
00:27:29.658 [2024-11-20 17:21:47.447894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.658 [2024-11-20 17:21:47.447977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.658 qpair failed and we were unable to recover it.
00:27:29.658 [2024-11-20 17:21:47.448224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.658 [2024-11-20 17:21:47.448260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.658 qpair failed and we were unable to recover it. 00:27:29.658 [2024-11-20 17:21:47.448456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.658 [2024-11-20 17:21:47.448488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.658 qpair failed and we were unable to recover it. 00:27:29.658 [2024-11-20 17:21:47.448670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.658 [2024-11-20 17:21:47.448702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.658 qpair failed and we were unable to recover it. 00:27:29.658 [2024-11-20 17:21:47.448896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.658 [2024-11-20 17:21:47.448927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.658 qpair failed and we were unable to recover it. 00:27:29.658 [2024-11-20 17:21:47.449061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.658 [2024-11-20 17:21:47.449092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.658 qpair failed and we were unable to recover it. 
00:27:29.658 [2024-11-20 17:21:47.449283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.658 [2024-11-20 17:21:47.449315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.658 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.449493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.449524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.449827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.449859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.449985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.450016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.450212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.450245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 
00:27:29.659 [2024-11-20 17:21:47.450372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.450401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.450529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.450559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.450705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.450745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.450948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.450980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.451103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.451134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 
00:27:29.659 [2024-11-20 17:21:47.451399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.451434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.451556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.451588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.451723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.451754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.451939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.451971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.452218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.452250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 
00:27:29.659 [2024-11-20 17:21:47.452385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.452415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.452544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.452575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.452684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.452715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.452884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.452915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.453109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.453141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 
00:27:29.659 [2024-11-20 17:21:47.453280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.453325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.453436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.453467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.453586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.453618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.453745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.453775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.453903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.453933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 
00:27:29.659 [2024-11-20 17:21:47.454112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.454143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.454397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.659 [2024-11-20 17:21:47.454431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.659 qpair failed and we were unable to recover it. 00:27:29.659 [2024-11-20 17:21:47.454554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.454585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.454762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.454793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.454908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.454939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 
00:27:29.660 [2024-11-20 17:21:47.455130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.455163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.455384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.455418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.455614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.455647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.455777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.455807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.455932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.455965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 
00:27:29.660 [2024-11-20 17:21:47.456148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.456180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.456376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.456410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.456704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.456736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.456912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.456945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.457093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.457124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 
00:27:29.660 [2024-11-20 17:21:47.457341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.457373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.457502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.457532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.457642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.457673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.457847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.457877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.458001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.458033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 
00:27:29.660 [2024-11-20 17:21:47.458223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.458256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.458387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.458418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.458551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.458589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.458717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.458748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.458927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.458956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 
00:27:29.660 [2024-11-20 17:21:47.459078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.459109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.459248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.660 [2024-11-20 17:21:47.459282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.660 qpair failed and we were unable to recover it. 00:27:29.660 [2024-11-20 17:21:47.459410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.459440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.459571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.459603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.459723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.459753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 
00:27:29.661 [2024-11-20 17:21:47.459941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.459971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.460075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.460107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.460290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.460323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.460522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.460552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.460662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.460691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 
00:27:29.661 [2024-11-20 17:21:47.460895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.460927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.461046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.461076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.461254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.461287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.461468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.461500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.461686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.461719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 
00:27:29.661 [2024-11-20 17:21:47.461825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.461858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.462035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.462068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.462195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.462240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.462356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.462389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 00:27:29.661 [2024-11-20 17:21:47.462592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.661 [2024-11-20 17:21:47.462624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.661 qpair failed and we were unable to recover it. 
00:27:29.661 [2024-11-20 17:21:47.462831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.661 [2024-11-20 17:21:47.462864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.661 qpair failed and we were unable to recover it.
00:27:29.662 [2024-11-20 17:21:47.463667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.662 [2024-11-20 17:21:47.463739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.662 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock connection error for tqpair=0x7fc474000b90 or tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 17:21:47.462831 through 17:21:47.485862; identical entries elided ...]
00:27:29.666 [2024-11-20 17:21:47.486047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.486078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.486320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.486352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.486479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.486511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.486729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.486761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.486951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.486983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 
00:27:29.666 [2024-11-20 17:21:47.487101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.487133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.487258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.487290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.487533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.487565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.487758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.487789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.487896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.487929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 
00:27:29.666 [2024-11-20 17:21:47.488196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.488241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.488365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.488397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.488504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.488535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.488650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.488683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.488794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.488824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 
00:27:29.666 [2024-11-20 17:21:47.488951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.488982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.489110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.489141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.666 qpair failed and we were unable to recover it. 00:27:29.666 [2024-11-20 17:21:47.489272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.666 [2024-11-20 17:21:47.489306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.489405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.489434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.489615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.489648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 
00:27:29.667 [2024-11-20 17:21:47.489831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.489864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.490130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.490163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.490376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.490409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.490681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.490715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.490902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.490933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 
00:27:29.667 [2024-11-20 17:21:47.491050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.491082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.491211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.491246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.491384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.491415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.491605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.491636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.491925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.491957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 
00:27:29.667 [2024-11-20 17:21:47.492144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.492175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.492332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.492364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.492480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.492511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.492619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.492650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.492839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.492873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 
00:27:29.667 [2024-11-20 17:21:47.493118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.493150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.493299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.493340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.493447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.493486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.493686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.493717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.493914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.493946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 
00:27:29.667 [2024-11-20 17:21:47.494128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.494158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.494297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.494329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.494544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.494573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.667 qpair failed and we were unable to recover it. 00:27:29.667 [2024-11-20 17:21:47.494779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.667 [2024-11-20 17:21:47.494808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.495046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.495075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 
00:27:29.668 [2024-11-20 17:21:47.495253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.495292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.495512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.495543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.495740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.495771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.495880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.495912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.496157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.496188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 
00:27:29.668 [2024-11-20 17:21:47.496428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.496462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.496592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.496622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.498012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.498064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.498360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.498396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.498661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.498694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 
00:27:29.668 [2024-11-20 17:21:47.498927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.498959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.499146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.499178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.499371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.499403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.499578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.499610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.499794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.499826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 
00:27:29.668 [2024-11-20 17:21:47.500016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.500047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.500227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.500259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.500387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.500417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.500599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.500631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 00:27:29.668 [2024-11-20 17:21:47.500806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.668 [2024-11-20 17:21:47.500836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.668 qpair failed and we were unable to recover it. 
00:27:29.669 [2024-11-20 17:21:47.500954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.500985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.501245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.501279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.501402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.501433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.501563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.501594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.501773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.501804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 
00:27:29.669 [2024-11-20 17:21:47.501932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.501963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.502081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.502111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.502232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.502264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.502438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.502471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.502702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.502734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 
00:27:29.669 [2024-11-20 17:21:47.502856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.502887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.503090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.503129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.503324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.503357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.503536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.503567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.503693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.503723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 
00:27:29.669 [2024-11-20 17:21:47.503975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.504006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.504137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.504170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.504512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.504584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.504762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.504816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 00:27:29.669 [2024-11-20 17:21:47.504987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.669 [2024-11-20 17:21:47.505035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.669 qpair failed and we were unable to recover it. 
00:27:29.669 [2024-11-20 17:21:47.505175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.669 [2024-11-20 17:21:47.505238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.669 qpair failed and we were unable to recover it.
00:27:29.669 [2024-11-20 17:21:47.505456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.669 [2024-11-20 17:21:47.505495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.669 qpair failed and we were unable to recover it.
00:27:29.669 [2024-11-20 17:21:47.505726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.669 [2024-11-20 17:21:47.505765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.669 qpair failed and we were unable to recover it.
00:27:29.669 [2024-11-20 17:21:47.505911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.669 [2024-11-20 17:21:47.505957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.669 qpair failed and we were unable to recover it.
00:27:29.669 [2024-11-20 17:21:47.506225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.669 [2024-11-20 17:21:47.506273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.669 qpair failed and we were unable to recover it.
00:27:29.669 [2024-11-20 17:21:47.506492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.669 [2024-11-20 17:21:47.506531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.669 qpair failed and we were unable to recover it.
00:27:29.669 [2024-11-20 17:21:47.506728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.506767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.507074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.507113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.507313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.507354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.507562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.507599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.507804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.507836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.507972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.508004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.508190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.508254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.508464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.508497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.508608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.508639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.508779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.508811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.508932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.508961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.509072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.509103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.509231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.509266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.509372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.509402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.509512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.509544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.509669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.509700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.509945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.509977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.510130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.510160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.510415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.510450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.510583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.510614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.510734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.510765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.510868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.510899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.511095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.511127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.511270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.511303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.511551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.511584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.511759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.670 [2024-11-20 17:21:47.511797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.670 qpair failed and we were unable to recover it.
00:27:29.670 [2024-11-20 17:21:47.511925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.511956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.512156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.512188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.512307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.512340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.512531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.512564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.512738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.512771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.512959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.512992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.513112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.513144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.513291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.513324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.513458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.513489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.513619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.513651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.513778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.513810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.513929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.513964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.514077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.514109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.514327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.514363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.514504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.514537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.514667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.514699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.514937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.514968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.515086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.515118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.515335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.515368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.515550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.515582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.515707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.515739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.515860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.515891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.515997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.516031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.516164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.516197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.516445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.516479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.516607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.671 [2024-11-20 17:21:47.516639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.671 qpair failed and we were unable to recover it.
00:27:29.671 [2024-11-20 17:21:47.516762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.516794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.516982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.517014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.517183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.517225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.517400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.517433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.517572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.517605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.517723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.517754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.517861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.517894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.518018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.518051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.518175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.518219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.518333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.518366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.518510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.518544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.518652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.518685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.518800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.518832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.519011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.519049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.519291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.519325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.519505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.519537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.519668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.519700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.519826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.519857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.520030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.520063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.520303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.520337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.520461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.520492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.520686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.520718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.520834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.520866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.520999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.521031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.521140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.521172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.521351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.521384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.521568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.521599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.521800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.521833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.521937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.521970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.522073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.522106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.522238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.522272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.522381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.522414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.522526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.522557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.522665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.522698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.522952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.522986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.523159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.523189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.523309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.672 [2024-11-20 17:21:47.523341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.672 qpair failed and we were unable to recover it.
00:27:29.672 [2024-11-20 17:21:47.523528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.672 [2024-11-20 17:21:47.523559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.672 qpair failed and we were unable to recover it. 00:27:29.672 [2024-11-20 17:21:47.523668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.672 [2024-11-20 17:21:47.523699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.672 qpair failed and we were unable to recover it. 00:27:29.672 [2024-11-20 17:21:47.523913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.672 [2024-11-20 17:21:47.523947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.672 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.524155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.524187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.524310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.524342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 
00:27:29.673 [2024-11-20 17:21:47.524590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.524624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.524801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.524834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.525009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.525042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.525168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.525200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.525417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.525450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 
00:27:29.673 [2024-11-20 17:21:47.525695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.525727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.525897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.525930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.526135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.526168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.526362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.526395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.526566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.526599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 
00:27:29.673 [2024-11-20 17:21:47.526772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.526806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.526924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.526961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.527082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.527123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.527315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.527350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.527586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.527617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 
00:27:29.673 [2024-11-20 17:21:47.527740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.527772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.527908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.527941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.528118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.528151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.528294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.528327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.528513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.528546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 
00:27:29.673 [2024-11-20 17:21:47.528789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.528821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.528992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.529024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.529222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.529255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.529431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.529464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.673 qpair failed and we were unable to recover it. 00:27:29.673 [2024-11-20 17:21:47.529578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.673 [2024-11-20 17:21:47.529610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 
00:27:29.674 [2024-11-20 17:21:47.529847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.529880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.530003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.530036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.530221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.530255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.530499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.530531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.530651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.530684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 
00:27:29.674 [2024-11-20 17:21:47.530928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.530962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.531134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.531167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.531357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.531391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.531560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.531592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.531716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.531747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 
00:27:29.674 [2024-11-20 17:21:47.531869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.531900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.532088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.532121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.532246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.532279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.532394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.532426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.532621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.532654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 
00:27:29.674 [2024-11-20 17:21:47.532775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.532808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.532981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.533013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.533138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.533170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.533312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.533345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.674 [2024-11-20 17:21:47.533515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.533548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 
00:27:29.674 [2024-11-20 17:21:47.533724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.674 [2024-11-20 17:21:47.533756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.674 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.533867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.533899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.534142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.534175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.534430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.534464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.534568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.534600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 
00:27:29.675 [2024-11-20 17:21:47.534866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.534897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.535167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.535212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.535421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.535455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.535652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.535684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.535806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.535837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 
00:27:29.675 [2024-11-20 17:21:47.536099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.536131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.536270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.536303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.536494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.536525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.536652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.536684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.536865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.536896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 
00:27:29.675 [2024-11-20 17:21:47.537069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.537103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.537224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.537257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.537445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.537477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.537718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.537751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.537949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.537980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 
00:27:29.675 [2024-11-20 17:21:47.538175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.538217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.538433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.538464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.538589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.538619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.538865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.538897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.539185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.539226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 
00:27:29.675 [2024-11-20 17:21:47.539352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.539383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.675 [2024-11-20 17:21:47.539490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.675 [2024-11-20 17:21:47.539524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.675 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.539710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.539741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.539918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.539950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.540077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.540108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 
00:27:29.676 [2024-11-20 17:21:47.540301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.540334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.540575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.540608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.540796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.540829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.541048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.541083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.541263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.541298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 
00:27:29.676 [2024-11-20 17:21:47.541507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.541540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.541808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.541842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.541959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.541991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.542169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.542213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.542340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.542375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 
00:27:29.676 [2024-11-20 17:21:47.542555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.542590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.542768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.542804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.543003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.543037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.543222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.543255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.543451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.543485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 
00:27:29.676 [2024-11-20 17:21:47.543752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.543788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.543983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.544023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.544148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.544183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.544347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.544381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.544530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.544565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 
00:27:29.676 [2024-11-20 17:21:47.544810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.544842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.545023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.545056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.676 [2024-11-20 17:21:47.545174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.676 [2024-11-20 17:21:47.545218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.676 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.545341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.545372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.545478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.545511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 
00:27:29.677 [2024-11-20 17:21:47.545766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.545798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.546000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.546031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.546147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.546181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.546313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.546346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.546559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.546592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 
00:27:29.677 [2024-11-20 17:21:47.546726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.546757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.546948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.546982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.547108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.547139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.547245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.547278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.547397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.547430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 
00:27:29.677 [2024-11-20 17:21:47.547543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.547575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.547746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.547779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.547955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.547989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.548164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.548197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.548317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.548348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 
00:27:29.677 [2024-11-20 17:21:47.548611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.548645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.548830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.548862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.548984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.549017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.549304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.549339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.549539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.549572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 
00:27:29.677 [2024-11-20 17:21:47.549761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.549793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.549911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.549943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.550117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.550150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.550336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.677 [2024-11-20 17:21:47.550370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.677 qpair failed and we were unable to recover it. 00:27:29.677 [2024-11-20 17:21:47.550489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.550523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 
00:27:29.678 [2024-11-20 17:21:47.550762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.550793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.550970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.551004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.551197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.551238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.551428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.551461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.551649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.551682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 
00:27:29.678 [2024-11-20 17:21:47.551794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.551827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.551952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.551989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.552223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.552256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.552391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.552422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.552610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.552643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 
00:27:29.678 [2024-11-20 17:21:47.552776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.552808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.552953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.552985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.553172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.553216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.553464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.553497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.553699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.553733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 
00:27:29.678 [2024-11-20 17:21:47.553915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.553948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.554124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.554157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.554322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.554356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.554473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.554506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.554682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.554715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 
00:27:29.678 [2024-11-20 17:21:47.554908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.554942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.555045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.555076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.555252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.555286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.555402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.555434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 00:27:29.678 [2024-11-20 17:21:47.555649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.678 [2024-11-20 17:21:47.555681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.678 qpair failed and we were unable to recover it. 
00:27:29.678 [2024-11-20 17:21:47.555793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.555825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.556010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.556042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.556222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.556256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.556381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.556415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.556552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.556583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 
00:27:29.679 [2024-11-20 17:21:47.556768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.556801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.556930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.556962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.557074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.557106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.557467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.557537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.557729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.557767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 
00:27:29.679 [2024-11-20 17:21:47.558057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.558110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.558359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.558406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.558677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.558715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.558987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.559019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.559300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.559334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 
00:27:29.679 [2024-11-20 17:21:47.559584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.559624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.559873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.559904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.560101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.560146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.560312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.560360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.560579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.560619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 
00:27:29.679 [2024-11-20 17:21:47.560808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.560844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.561038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.561075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.561192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.561234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.561427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.561459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.561672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.561706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 
00:27:29.679 [2024-11-20 17:21:47.561899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.561932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.679 qpair failed and we were unable to recover it. 00:27:29.679 [2024-11-20 17:21:47.562105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.679 [2024-11-20 17:21:47.562138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 00:27:29.680 [2024-11-20 17:21:47.562313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.680 [2024-11-20 17:21:47.562353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 00:27:29.680 [2024-11-20 17:21:47.562476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.680 [2024-11-20 17:21:47.562506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 00:27:29.680 [2024-11-20 17:21:47.562621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.680 [2024-11-20 17:21:47.562653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 
00:27:29.680 [2024-11-20 17:21:47.562834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.680 [2024-11-20 17:21:47.562866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 00:27:29.680 [2024-11-20 17:21:47.563045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.680 [2024-11-20 17:21:47.563082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 00:27:29.680 [2024-11-20 17:21:47.563253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.680 [2024-11-20 17:21:47.563287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 00:27:29.680 [2024-11-20 17:21:47.563495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.680 [2024-11-20 17:21:47.563527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 00:27:29.680 [2024-11-20 17:21:47.563781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.680 [2024-11-20 17:21:47.563815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.680 qpair failed and we were unable to recover it. 
00:27:29.680 [2024-11-20 17:21:47.564042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.564081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.564281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.564313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.564435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.564467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.564733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.564766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.564968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.564999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.565132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.565165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.565351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.565385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.565495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.565528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.565720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.565754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.565947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.565979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.566214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.566260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.566410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.566452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.566725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.566767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.566994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.567036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.567229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.567265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.567462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.567495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.567742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.567773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.680 qpair failed and we were unable to recover it.
00:27:29.680 [2024-11-20 17:21:47.567961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.680 [2024-11-20 17:21:47.567993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.568177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.568221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.568484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.568517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.568708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.568741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.568861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.568895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.569137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.569171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.569326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.569360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.569498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.569531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.569712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.569745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.569954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.569993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.570179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.570240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.570412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.570443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.570618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.570652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.570781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.570814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.571058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.571092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.571287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.571322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.571449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.571486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.571693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.571726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.571914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.571954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.572227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.572261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.572453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.572486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.681 [2024-11-20 17:21:47.572703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.681 [2024-11-20 17:21:47.572736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.681 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.572845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.572877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.573012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.573045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.573244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.573278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.573517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.573550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.573736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.573769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.573953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.573985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.574161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.574194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.574324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.574356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.574547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.574580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.574847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.574879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.575006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.575040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.575157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.575192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.575399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.575433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.575609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.575642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.575883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.575952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.576140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2af0 is same with the state(6) to be set
00:27:29.682 [2024-11-20 17:21:47.576536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.576610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.576831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.576868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.577124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.577158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.577385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.577421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.577622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.577656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.577921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.577954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.578088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.578120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.578314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.578347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.578466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.578498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.578691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.578723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.682 [2024-11-20 17:21:47.578898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.682 [2024-11-20 17:21:47.578931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.682 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.579120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.579154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.579412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.579467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.579676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.579725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.579857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.579895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.580030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.580063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.580198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.580242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.580503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.580535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.580723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.580755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.580948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.580981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.581164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.581196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.581521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.581553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.581798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.581830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.582040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.582072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.582340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.582374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.582558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.582597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.582732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.582765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.582960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.582993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.583180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.583224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.583345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.583377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.583586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.583619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.583832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.583865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.584127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.584161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.584282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.584316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.584581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.584615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.584734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.584766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.683 [2024-11-20 17:21:47.584882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.683 [2024-11-20 17:21:47.584916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.683 qpair failed and we were unable to recover it.
00:27:29.684 [2024-11-20 17:21:47.585182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.684 [2024-11-20 17:21:47.585223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.684 qpair failed and we were unable to recover it.
00:27:29.684 [2024-11-20 17:21:47.585419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.684 [2024-11-20 17:21:47.585451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.684 qpair failed and we were unable to recover it.
00:27:29.684 [2024-11-20 17:21:47.585650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.684 [2024-11-20 17:21:47.585684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.684 qpair failed and we were unable to recover it.
00:27:29.684 [2024-11-20 17:21:47.585877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.684 [2024-11-20 17:21:47.585911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.684 qpair failed and we were unable to recover it.
00:27:29.684 [2024-11-20 17:21:47.586228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.586263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.586444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.586475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.586737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.586771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.586957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.586989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.587177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.587218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 
00:27:29.684 [2024-11-20 17:21:47.587429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.587462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.587725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.587758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.587884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.587916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.588030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.588064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.588333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.588367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 
00:27:29.684 [2024-11-20 17:21:47.588487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.588519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.588709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.588771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.589024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.589095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.589344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.589393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.589603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.589640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 
00:27:29.684 [2024-11-20 17:21:47.589885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.589919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.590159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.590191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.590396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.590429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.590628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.590659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.590894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.590927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 
00:27:29.684 [2024-11-20 17:21:47.591167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.591208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.684 [2024-11-20 17:21:47.591398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.684 [2024-11-20 17:21:47.591430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.684 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.591546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.591578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.591788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.591819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.591944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.591976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 
00:27:29.685 [2024-11-20 17:21:47.592156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.592188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.592333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.592366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.592556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.592588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.592782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.592813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.593000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.593033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 
00:27:29.685 [2024-11-20 17:21:47.593148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.593180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.593370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.593405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.593522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.593556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.593736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.593768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.593968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.594001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 
00:27:29.685 [2024-11-20 17:21:47.594253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.594288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.594404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.594438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.594678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.594711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.594965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.594998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.595200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.595241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 
00:27:29.685 [2024-11-20 17:21:47.595384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.595418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.595641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.595674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.595862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.595896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.596070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.596103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.596394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.596427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 
00:27:29.685 [2024-11-20 17:21:47.596713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.596745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.596930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.596963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.597093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.597125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.597315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.597348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 00:27:29.685 [2024-11-20 17:21:47.597474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.685 [2024-11-20 17:21:47.597507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.685 qpair failed and we were unable to recover it. 
00:27:29.685 [2024-11-20 17:21:47.597682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.597713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.597838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.597875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.598004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.598036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.598276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.598311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.598481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.598515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 
00:27:29.686 [2024-11-20 17:21:47.598712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.598746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.598854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.598885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.599074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.599105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.599306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.599339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.599620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.599651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 
00:27:29.686 [2024-11-20 17:21:47.599826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.599857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.600069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.600102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.600336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.600370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.600543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.600576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.600836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.600869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 
00:27:29.686 [2024-11-20 17:21:47.601066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.601100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.601363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.601397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.601684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.601716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.601991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.602023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.602148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.602181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 
00:27:29.686 [2024-11-20 17:21:47.602498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.602532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.602708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.602739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.602943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.602974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.603236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.603271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.603510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.603543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 
00:27:29.686 [2024-11-20 17:21:47.603731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.603763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.603893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.603926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.604103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.604136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.686 qpair failed and we were unable to recover it. 00:27:29.686 [2024-11-20 17:21:47.604415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.686 [2024-11-20 17:21:47.604449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.687 qpair failed and we were unable to recover it. 00:27:29.687 [2024-11-20 17:21:47.604664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.687 [2024-11-20 17:21:47.604696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.687 qpair failed and we were unable to recover it. 
00:27:29.687 [2024-11-20 17:21:47.604887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.687 [2024-11-20 17:21:47.604921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.687 qpair failed and we were unable to recover it. 00:27:29.687 [2024-11-20 17:21:47.605101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.687 [2024-11-20 17:21:47.605132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.687 qpair failed and we were unable to recover it. 00:27:29.687 [2024-11-20 17:21:47.605317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.687 [2024-11-20 17:21:47.605351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.687 qpair failed and we were unable to recover it. 00:27:29.687 [2024-11-20 17:21:47.605545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.687 [2024-11-20 17:21:47.605579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.687 qpair failed and we were unable to recover it. 00:27:29.687 [2024-11-20 17:21:47.605710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.687 [2024-11-20 17:21:47.605742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.687 qpair failed and we were unable to recover it. 
00:27:29.687 [2024-11-20 17:21:47.605870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.605901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.606097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.606129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.606245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.606278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.606402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.606434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.606542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.606574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.606753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.606785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.606898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.606936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.607120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.607152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.607335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.607370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.607606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.607639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.607924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.607957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.608224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.608259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.608452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.608484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.608662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.608693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.608855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.608888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.609064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.609096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.609225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.687 [2024-11-20 17:21:47.609259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.687 qpair failed and we were unable to recover it.
00:27:29.687 [2024-11-20 17:21:47.609389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.609421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.609603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.609636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.609880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.609912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.610102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.610135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.610312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.610347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.610553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.610587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.610729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.610762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.610933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.610966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.611095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.611128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.611238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.611273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.611446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.611478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.611657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.611690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.611961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.611992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.612173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.612230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.612477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.612512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.612645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.612676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.612915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.612949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.613086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.613119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.613302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.613336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.613523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.613555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.613800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.613833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.613958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.613990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.614118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.614149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.614265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.614301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.614544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.614576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.614700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.614732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.614856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.614889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.615081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.615113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.688 qpair failed and we were unable to recover it.
00:27:29.688 [2024-11-20 17:21:47.615235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.688 [2024-11-20 17:21:47.615269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.615458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.615495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.615638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.615670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.615780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.615812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.616096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.616127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.616260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.616293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.616499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.616533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.616654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.616687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.616877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.616909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.617091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.617124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.617342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.617376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.617562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.617594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.617795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.617827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.617935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.617966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.618258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.618313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.618535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.618585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.618753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.618798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.618947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.618993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.619269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.619317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.619541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.619590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.619791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.619825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.620006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.620038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.620249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.620283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.620400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.620431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.620556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.620589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.620706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.620737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.620856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.620887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.621099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.621132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.689 [2024-11-20 17:21:47.621291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.689 [2024-11-20 17:21:47.621327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.689 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.621509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.621540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.621718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.621750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.621861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.621891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.622067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.622099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.622307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.622341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.622452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.622483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.622655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.622686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.622918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.622950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.623121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.623152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.623337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.623370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.623544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.623576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.623695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.623727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.623928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.623965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.624172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.624215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.624322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.624354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.624561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.624592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.624873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.624905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.625143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.625175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.625378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.625410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.625623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.625655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.625780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.625811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.625929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.625960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.626155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.626186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.626452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.626485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.626607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.626639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.626907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.626937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.627117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.627149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.627454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.690 [2024-11-20 17:21:47.627492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.690 qpair failed and we were unable to recover it.
00:27:29.690 [2024-11-20 17:21:47.627688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.627720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.627909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.627940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.628221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.628256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.628494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.628526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.628642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.628673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.628793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.628824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.629025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.629056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.629176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.629221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.629414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.629446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.629687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.629719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.629894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.629925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.630132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.630164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.630421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.691 [2024-11-20 17:21:47.630454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:29.691 qpair failed and we were unable to recover it.
00:27:29.691 [2024-11-20 17:21:47.630645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.630676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.630781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.630813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.630999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.631031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.631141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.631172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.631371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.631404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 
00:27:29.691 [2024-11-20 17:21:47.631523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.631554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.631667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.631699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.631892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.631923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.632065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.632096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.632381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.632415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 
00:27:29.691 [2024-11-20 17:21:47.632593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.632625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.632748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.632791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.633034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.633066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.633200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.633246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.633441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.633472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 
00:27:29.691 [2024-11-20 17:21:47.633647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.633678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.633860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.633892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.691 [2024-11-20 17:21:47.634065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.691 [2024-11-20 17:21:47.634096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.691 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.634358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.634392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.634525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.634558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 
00:27:29.692 [2024-11-20 17:21:47.634692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.634723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.634851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.634882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.635063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.635095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.635236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.635270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.635395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.635427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 
00:27:29.692 [2024-11-20 17:21:47.635680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.635712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.635827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.635858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.636032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.636063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.636197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.636244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.636497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.636529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 
00:27:29.692 [2024-11-20 17:21:47.636739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.636770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.636980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.637011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.637218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.637252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.637468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.637501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.637698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.637729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 
00:27:29.692 [2024-11-20 17:21:47.637911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.637943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.638121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.638153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.638378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.638411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.638631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.638662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.638832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.638864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 
00:27:29.692 [2024-11-20 17:21:47.638999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.639031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.639151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.639182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.639344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.639377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.639560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.639592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.639774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.639805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 
00:27:29.692 [2024-11-20 17:21:47.639999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.640030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.640227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.640262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.640549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.692 [2024-11-20 17:21:47.640582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-11-20 17:21:47.640795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.640826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.640997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.641029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-11-20 17:21:47.641292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.641325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.641523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.641561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.641750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.641782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.641973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.642004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.642219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.642252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-11-20 17:21:47.642385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.642416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.642539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.642571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.642782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.642814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.643104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.643135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.643331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.643365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-11-20 17:21:47.643499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.643531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.643645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.643677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.643806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.643836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.644056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.644086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.644255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.644290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-11-20 17:21:47.644568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.644600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.644817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.644849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.645067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.645099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.645292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.645325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.645465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.645496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-11-20 17:21:47.645809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.645841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.646049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.646080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.646261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.646295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.646532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.646564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-11-20 17:21:47.646753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.646784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-11-20 17:21:47.646980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.693 [2024-11-20 17:21:47.647012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.694 [2024-11-20 17:21:47.647191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.694 [2024-11-20 17:21:47.647238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.694 [2024-11-20 17:21:47.647351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.694 [2024-11-20 17:21:47.647381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.694 [2024-11-20 17:21:47.647573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.694 [2024-11-20 17:21:47.647605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.694 [2024-11-20 17:21:47.647821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.694 [2024-11-20 17:21:47.647853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.694 qpair failed and we were unable to recover it. 
00:27:29.989 [2024-11-20 17:21:47.669739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-11-20 17:21:47.669770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.989 [2024-11-20 17:21:47.669918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.989 [2024-11-20 17:21:47.669950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.989 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.670073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.670103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.670221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.670252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.670367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.670399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-11-20 17:21:47.670686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.670718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.670849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.670880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.671087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.671119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.671240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.671274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.671398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.671430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-11-20 17:21:47.671621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.671653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.671837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.671869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.672054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.672085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.672192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.672255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.672448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.672480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-11-20 17:21:47.672619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.672651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.672774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.672805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.672981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.673011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.673136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.673168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.673352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.673384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-11-20 17:21:47.673519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.673550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.673726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.673769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.673952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.673983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.674102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.674134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.674249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.674282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-11-20 17:21:47.674468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.674499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.674622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.674654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.674782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.674812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.674994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.675025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.675219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.675252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-11-20 17:21:47.675453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.675485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.675662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.675693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.675806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.675837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.675970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.676001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.676117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.676149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 
00:27:29.990 [2024-11-20 17:21:47.676409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.676441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.676564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.676595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.676711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.676742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.676849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.676880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.990 qpair failed and we were unable to recover it. 00:27:29.990 [2024-11-20 17:21:47.676999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.990 [2024-11-20 17:21:47.677030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-11-20 17:21:47.677139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.677170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.677426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.677500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.677789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.677825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.677970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.678003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.678133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.678165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-11-20 17:21:47.678291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.678323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.678590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.678622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.678839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.678871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.679012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.679053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.679232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.679264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-11-20 17:21:47.679517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.679548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.679657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.679689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.679832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.679862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.679983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.680015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.680251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.680285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-11-20 17:21:47.680396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.680426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.680685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.680718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.680845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.680875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.681138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.681170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.681371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.681407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-11-20 17:21:47.681547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.681579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.681823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.681855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.681997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.682029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.682190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.682251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.682384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.682414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-11-20 17:21:47.682581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.682613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.682751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.682782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.682973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.683004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.683249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.683283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.683473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.683504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-11-20 17:21:47.683631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.683663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.683778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.683809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.683993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.684024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.684266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.684298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 00:27:29.991 [2024-11-20 17:21:47.684408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.684439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it. 
00:27:29.991 [2024-11-20 17:21:47.684688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.991 [2024-11-20 17:21:47.684759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:29.991 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) and unrecoverable qpair errors repeated for tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420]
00:27:29.992 [2024-11-20 17:21:47.688219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-11-20 17:21:47.688255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) and unrecoverable qpair errors repeated for tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420]
00:27:29.992 [2024-11-20 17:21:47.691401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.992 [2024-11-20 17:21:47.691437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:29.992 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) and unrecoverable qpair errors repeated for tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420]
00:27:29.993 [2024-11-20 17:21:47.698761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.993 [2024-11-20 17:21:47.698825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.993 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) and unrecoverable qpair errors repeated for tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420]
00:27:29.995 [2024-11-20 17:21:47.707882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.707913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.708035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.708067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.708192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.708233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.708471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.708503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.708744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.708778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-11-20 17:21:47.708896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.708933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.709057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.709090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.709294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.709329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.709444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.709475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.709616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.709647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-11-20 17:21:47.709822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.709855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.709969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.710000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.710104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.710136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.710325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.710358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.710616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.710646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-11-20 17:21:47.710745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.710776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.710963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.710995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.711241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.711273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.711458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.711490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.711619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.711650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-11-20 17:21:47.711889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.711920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.712125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.712156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.712313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.712346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.712587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.712619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.712810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.712841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-11-20 17:21:47.713017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.713048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.713249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.713282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.713484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.713515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.713764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.713795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.714006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.714036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-11-20 17:21:47.714150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.714182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.714374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.714405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.714689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.714762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.715018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.715056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.715184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.715233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-11-20 17:21:47.715425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.715457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.715572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.715603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.715740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.715773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.715882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.715914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.716131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.716163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.995 [2024-11-20 17:21:47.716348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.716384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.716583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.716617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.716737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.716769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.716938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.716971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 00:27:29.995 [2024-11-20 17:21:47.717145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.995 [2024-11-20 17:21:47.717179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.995 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-11-20 17:21:47.717382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.717415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.717564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.717596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.717799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.717831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.718122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.718154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.718445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.718479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-11-20 17:21:47.718691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.718722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.718892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.718925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.719099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.719132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.719249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.719283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.719419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.719452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-11-20 17:21:47.719759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.719793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.719978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.720012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.720192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.720239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.720351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.720382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.720588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.720627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-11-20 17:21:47.720734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.720765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.721013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.721046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.721334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.721370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.721500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.721532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.721706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.721739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-11-20 17:21:47.721911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.721942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.722115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.722147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.722396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.722431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.722553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.722585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.722760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.722793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-11-20 17:21:47.722997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.723029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.723162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.723193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.723443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.723476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.723671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.723701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 00:27:29.996 [2024-11-20 17:21:47.723925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.723956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it. 
00:27:29.996 [2024-11-20 17:21:47.724083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.996 [2024-11-20 17:21:47.724115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.996 qpair failed and we were unable to recover it.
[identical connect() failure messages (errno = 111, tqpair=0x14e4ba0, addr=10.0.0.2, port=4420) repeated through timestamp 17:21:47.748955; duplicate log lines omitted]
00:27:29.999 [2024-11-20 17:21:47.749135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.749165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.749361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.749394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.749506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.749537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.749654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.749685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.749875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.749906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 
00:27:29.999 [2024-11-20 17:21:47.750084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.750116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.750235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.750268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.750448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.750479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.750735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.750767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.750885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.750915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 
00:27:29.999 [2024-11-20 17:21:47.751030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.751060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.751183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.751234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.751405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.751436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.751610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.751643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.751813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.751844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 
00:27:29.999 [2024-11-20 17:21:47.751955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.751986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.752212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.752244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.752438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.752471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.752670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.752707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.752885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.752916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 
00:27:29.999 [2024-11-20 17:21:47.753106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.753137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.753350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.753383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:29.999 qpair failed and we were unable to recover it. 00:27:29.999 [2024-11-20 17:21:47.753503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.999 [2024-11-20 17:21:47.753533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.753714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.753745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.753985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.754017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 [2024-11-20 17:21:47.754122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.754153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.754345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.754377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.754565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.754595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.754720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.754752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.754915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.754947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 [2024-11-20 17:21:47.755142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.755172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.755295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.755327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.755452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.755484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.755670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.755701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.755887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.755918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 [2024-11-20 17:21:47.756220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.756254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.756456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.756488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.756746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.756778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.756942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.756973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.757091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.757121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 [2024-11-20 17:21:47.757244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.757277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.757398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.757428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.757669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.757701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.757883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.757914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.758097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.758129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 [2024-11-20 17:21:47.758259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.758292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.758410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.758441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.758562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.758594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.758757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.758787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.758958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.758989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 [2024-11-20 17:21:47.759123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.759155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.759299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.759333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.759453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.759484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.759684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.759717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.759846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.759878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 
00:27:30.000 [2024-11-20 17:21:47.759987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.760017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.760137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.760169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.760296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.760327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.760437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.000 [2024-11-20 17:21:47.760468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.000 qpair failed and we were unable to recover it. 00:27:30.000 [2024-11-20 17:21:47.760635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.760708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-11-20 17:21:47.760925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.760961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.761162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.761194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.761395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.761428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.761541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.761574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.761754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.761785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-11-20 17:21:47.762028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.762058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.762179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.762224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.762333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.762365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.762475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.762506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.762698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.762729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-11-20 17:21:47.762846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.762877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.763004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.763035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.763152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.763191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.763499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.763532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.763711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.763742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-11-20 17:21:47.763864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.763895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.764089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.764120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.764245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.764278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.764412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.764443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.764639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.764670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-11-20 17:21:47.764844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.764876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.765042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.765073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.765264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.765296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.765471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.765504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 00:27:30.001 [2024-11-20 17:21:47.765700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.001 [2024-11-20 17:21:47.765731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.001 qpair failed and we were unable to recover it. 
00:27:30.001 [2024-11-20 17:21:47.765836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-11-20 17:21:47.765866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.001 [2024-11-20 17:21:47.766047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-11-20 17:21:47.766079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.001 [2024-11-20 17:21:47.766192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-11-20 17:21:47.766241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.001 [2024-11-20 17:21:47.766380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-11-20 17:21:47.766422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.001 [2024-11-20 17:21:47.766617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.001 [2024-11-20 17:21:47.766652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:30.001 qpair failed and we were unable to recover it.
00:27:30.004 [2024-11-20 17:21:47.786020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.786051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.786229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.786262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.786382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.786413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.786537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.786568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.786748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.786780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 
00:27:30.004 [2024-11-20 17:21:47.786914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.786947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.787116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.787147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.787268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.787301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.787428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.787459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.787586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.787618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 
00:27:30.004 [2024-11-20 17:21:47.787797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.787827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.787941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.787973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.788161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.788191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.788308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.788340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 00:27:30.004 [2024-11-20 17:21:47.788520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.004 [2024-11-20 17:21:47.788551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.004 qpair failed and we were unable to recover it. 
00:27:30.004 [2024-11-20 17:21:47.788736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.788767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.788893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.788924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.789098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.789169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.789398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.789434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.789626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.789658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 
00:27:30.005 [2024-11-20 17:21:47.789832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.789864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.789986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.790017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.790121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.790153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.790288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.790321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.790499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.790530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 
00:27:30.005 [2024-11-20 17:21:47.790644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.790675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.790851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.790883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.791065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.791096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.791302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.791335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.791517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.791548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 
00:27:30.005 [2024-11-20 17:21:47.791697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.791737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.791843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.791874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.792050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.792081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.792278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.792310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.792517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.792549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 
00:27:30.005 [2024-11-20 17:21:47.792733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.792764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.792947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.792977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.793101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.793133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.793310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.793342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.793455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.793486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 
00:27:30.005 [2024-11-20 17:21:47.793598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.793629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.793800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.793830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.793956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.793986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.794093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.794123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.794334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.794367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 
00:27:30.005 [2024-11-20 17:21:47.794582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.794614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.794717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.794748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.794872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.794902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.795021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.795051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.795246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.795278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 
00:27:30.005 [2024-11-20 17:21:47.795476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.795506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.795750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.795781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.005 qpair failed and we were unable to recover it. 00:27:30.005 [2024-11-20 17:21:47.795970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.005 [2024-11-20 17:21:47.796002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.796246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.796278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.796453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.796484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 
00:27:30.006 [2024-11-20 17:21:47.796620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.796651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.796781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.796812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.797023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.797054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.797185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.797224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.797491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.797522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 
00:27:30.006 [2024-11-20 17:21:47.797778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.797809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.797921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.797952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.798124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.798155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.798336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.798368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.798566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.798597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 
00:27:30.006 [2024-11-20 17:21:47.798728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.798759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.798956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.798987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.799169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.799200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.799328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.799359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.799474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.799505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 
00:27:30.006 [2024-11-20 17:21:47.799631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.799662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.799856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.799887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.800080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.800111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.800312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.800344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.800458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.800489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 
00:27:30.006 [2024-11-20 17:21:47.800699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.800731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.800928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.800959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.801132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.801163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.801307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.801340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 00:27:30.006 [2024-11-20 17:21:47.801487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.006 [2024-11-20 17:21:47.801518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.006 qpair failed and we were unable to recover it. 
00:27:30.010 [2024-11-20 17:21:47.823273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.823305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.823491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.823523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.823762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.823793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.823978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.824009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.824185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.824229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 
00:27:30.010 [2024-11-20 17:21:47.824404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.824434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.824539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.824570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.824757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.824788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.824901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.824931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.825113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.825144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 
00:27:30.010 [2024-11-20 17:21:47.825349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.825382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.825555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.825586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.825704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.825735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.825932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.825963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.826080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.826111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 
00:27:30.010 [2024-11-20 17:21:47.826291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.826324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.826498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.826530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.826710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.826741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.826996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.827027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.827132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.827162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 
00:27:30.010 [2024-11-20 17:21:47.827416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.827449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.827570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.827601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.827716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.827748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.827935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.827967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.828083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.828113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 
00:27:30.010 [2024-11-20 17:21:47.828290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.828324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.828432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.828463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.828727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.828764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.828884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.828915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.829027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.829059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 
00:27:30.010 [2024-11-20 17:21:47.829245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.010 [2024-11-20 17:21:47.829278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.010 qpair failed and we were unable to recover it. 00:27:30.010 [2024-11-20 17:21:47.829452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.829482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.829675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.829707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.829994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.830025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.830266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.830299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 
00:27:30.011 [2024-11-20 17:21:47.830435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.830465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.830675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.830707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.830829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.830860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.831170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.831200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.831323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.831353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 
00:27:30.011 [2024-11-20 17:21:47.831525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.831556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.831670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.831702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.831832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.831863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.832037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.832068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.832242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.832274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 
00:27:30.011 [2024-11-20 17:21:47.832388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.832419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.832524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.832556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.832788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.832820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.832936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.832966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.833153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.833185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 
00:27:30.011 [2024-11-20 17:21:47.833324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.833356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.833463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.833494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.833608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.833639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.833811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.833842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.833997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.834028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 
00:27:30.011 [2024-11-20 17:21:47.834272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.834305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.834478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.834509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.834698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.834729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.834899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.834931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.835049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.835080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 
00:27:30.011 [2024-11-20 17:21:47.835218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.835252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.011 [2024-11-20 17:21:47.835430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.011 [2024-11-20 17:21:47.835461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.011 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.835641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.835672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.835847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.835878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.836060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.836091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 
00:27:30.012 [2024-11-20 17:21:47.836273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.836305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.836421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.836453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.836568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.836605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.836775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.836805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.836986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.837017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 
00:27:30.012 [2024-11-20 17:21:47.837259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.837292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.837486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.837517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.837629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.837660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.837836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.837867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.838059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.838090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 
00:27:30.012 [2024-11-20 17:21:47.838242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.838275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.838377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.838409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.838653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.838684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.838813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.838844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 00:27:30.012 [2024-11-20 17:21:47.838962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-11-20 17:21:47.838993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.012 qpair failed and we were unable to recover it. 
00:27:30.015 [2024-11-20 17:21:47.857708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.857738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.857913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.857945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.858122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.858154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.858303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.858336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.858581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.858654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.858947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.858983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.859244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.859281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.859408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.859441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.859618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.859650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.859780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.015 [2024-11-20 17:21:47.859811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:30.015 qpair failed and we were unable to recover it.
00:27:30.015 [2024-11-20 17:21:47.860075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.860105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.860282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.860317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.860529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.860561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.860730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.860762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.861024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.861056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.015 [2024-11-20 17:21:47.861245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.861280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.861492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.861524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.861628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.861660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.861852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.861885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.862131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.862163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.015 [2024-11-20 17:21:47.862358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.862393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.862566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.862599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.862794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.862824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.862996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.863029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.863226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.863259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.015 [2024-11-20 17:21:47.863445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.863478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.863643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.863674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.863811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.863842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.864095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.864128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.864394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.864428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.015 [2024-11-20 17:21:47.864612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.864645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.864831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.864867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.865080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.865112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.865297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.865330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 00:27:30.015 [2024-11-20 17:21:47.865550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.015 [2024-11-20 17:21:47.865582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.016 [2024-11-20 17:21:47.865778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.865810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.866070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.866101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.866239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.866272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.866466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.866498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.866635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.866666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 
00:27:30.016 [2024-11-20 17:21:47.866771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.866802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.866976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.867007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.867269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.867303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.867607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.867639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.867825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.867856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 
00:27:30.016 [2024-11-20 17:21:47.868004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.868036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.868229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.868262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.868465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.868497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.868694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.868726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.868858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.868889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 
00:27:30.016 [2024-11-20 17:21:47.869200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.869258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.869458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.869490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.869595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.869639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.869829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.869861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.869998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.870036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 
00:27:30.016 [2024-11-20 17:21:47.870253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.870286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.870551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.870583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.870822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.870854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.871004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.871035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.871301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.871334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 
00:27:30.016 [2024-11-20 17:21:47.871473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.871505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.871708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.871739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.871858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.871888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.871992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.872023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.872233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.872266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 
00:27:30.016 [2024-11-20 17:21:47.872387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.872418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.872602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.872633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.872752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.872783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.873026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.873057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.873185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.873225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 
00:27:30.016 [2024-11-20 17:21:47.873406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.873437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.873558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.873595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.016 [2024-11-20 17:21:47.873730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.016 [2024-11-20 17:21:47.873762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.016 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.873946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.873977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.874189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.874230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 
00:27:30.017 [2024-11-20 17:21:47.874345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.874377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.874564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.874596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.874711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.874741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.874936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.874967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.875157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.875187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 
00:27:30.017 [2024-11-20 17:21:47.875469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.875502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.875776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.875807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.875929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.875960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.876158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.876189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.876440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.876471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 
00:27:30.017 [2024-11-20 17:21:47.876595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.876627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.876734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.876765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.876890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.876921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.877095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.877126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.877302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.877335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 
00:27:30.017 [2024-11-20 17:21:47.877598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.877629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.877893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.877927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.878167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.878198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.878389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.878420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.878592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.878623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 
00:27:30.017 [2024-11-20 17:21:47.878837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.878869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.879017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.879047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.879182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.879234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.879450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.879483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.879664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.879695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 
00:27:30.017 [2024-11-20 17:21:47.879808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.879839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.880067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.880099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.880344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.880377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.880550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.880581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.880696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.880728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 
00:27:30.017 [2024-11-20 17:21:47.880910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.880941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.881135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.881167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.881328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.881361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.881535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.881566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 00:27:30.017 [2024-11-20 17:21:47.881752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.017 [2024-11-20 17:21:47.881783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.017 qpair failed and we were unable to recover it. 
00:27:30.017 [2024-11-20 17:21:47.881904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.881936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.882146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.882182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.882414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.882446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.882689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.882720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.882987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.883018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 
00:27:30.018 [2024-11-20 17:21:47.883199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.883241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.883481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.883512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.883694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.883725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.883848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.883879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.883991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.884021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 
00:27:30.018 [2024-11-20 17:21:47.884147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.884178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.884324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.884356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.884540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.884571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.884691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.884722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.884989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.885020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 
00:27:30.018 [2024-11-20 17:21:47.885126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.885157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.885363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.885396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.885689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.885721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.885920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.885951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.886168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.886199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 
00:27:30.018 [2024-11-20 17:21:47.886344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.886375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.886513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.886544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.886715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.886745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.886984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.887015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.887188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.887244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 
00:27:30.018 [2024-11-20 17:21:47.887451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.887482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.887670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.887701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.887851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.887881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.888060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.888092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.018 [2024-11-20 17:21:47.888356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.888388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 
00:27:30.018 [2024-11-20 17:21:47.888583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.018 [2024-11-20 17:21:47.888613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.018 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.888755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.888786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.888975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.889006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.889248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.889281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.889406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.889437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 
00:27:30.019 [2024-11-20 17:21:47.889563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.889594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.889773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.889804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.890013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.890044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.890237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.890269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.890459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.890490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 
00:27:30.019 [2024-11-20 17:21:47.890613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.890644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.890842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.890878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.891057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.891088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.891260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.891293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.891412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.891444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 
00:27:30.019 [2024-11-20 17:21:47.891698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.891729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.891849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.891880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.892052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.892084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.892284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.892317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.892567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.892598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 
00:27:30.019 [2024-11-20 17:21:47.892715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.892746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.892962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.892993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.893122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.893153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.893274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.893307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.893488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.893519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 
00:27:30.019 [2024-11-20 17:21:47.893727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.893758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.893876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.893907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.894149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.894181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.894361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.894393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.894659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.894689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 
00:27:30.019 [2024-11-20 17:21:47.894863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.894894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.895002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.895033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.895238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.895270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.895457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.895489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.895659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.895690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 
00:27:30.019 [2024-11-20 17:21:47.895879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.895910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.896096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.896127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.896311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.896344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.019 qpair failed and we were unable to recover it. 00:27:30.019 [2024-11-20 17:21:47.896592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.019 [2024-11-20 17:21:47.896624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.896796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.896827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 
00:27:30.020 [2024-11-20 17:21:47.897022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.897053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.897332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.897364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.897538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.897569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.897784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.897815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.898005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.898036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 
00:27:30.020 [2024-11-20 17:21:47.898246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.898277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.898450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.898481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.898668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.898699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.898890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.898920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 00:27:30.020 [2024-11-20 17:21:47.899099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.020 [2024-11-20 17:21:47.899130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.020 qpair failed and we were unable to recover it. 
00:27:30.020 [2024-11-20 17:21:47.899258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.020 [2024-11-20 17:21:47.899291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:30.020 qpair failed and we were unable to recover it.
00:27:30.023 [2024-11-20 17:21:47.923398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.923428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.923604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.923635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.923769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.923801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.923976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.924006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.924175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.924234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 
00:27:30.023 [2024-11-20 17:21:47.924369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.924400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.924611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.924648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.924867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.924898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.925029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.925060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.925244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.925277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 
00:27:30.023 [2024-11-20 17:21:47.925399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.925430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.925537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.925568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.925751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.925783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.925953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.925983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.926158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.926189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 
00:27:30.023 [2024-11-20 17:21:47.926392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.023 [2024-11-20 17:21:47.926424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.023 qpair failed and we were unable to recover it. 00:27:30.023 [2024-11-20 17:21:47.926663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.926695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.926823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.926854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.926988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.927019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.927227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.927259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 
00:27:30.024 [2024-11-20 17:21:47.927458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.927490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.927729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.927760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.927977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.928011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.928153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.928185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.928393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.928426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 
00:27:30.024 [2024-11-20 17:21:47.928565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.928596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.928799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.928830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.928952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.928984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.929190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.929231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.929406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.929437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 
00:27:30.024 [2024-11-20 17:21:47.929615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.929646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.929828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.929859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.930126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.930158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.930358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.930391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.930582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.930613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 
00:27:30.024 [2024-11-20 17:21:47.930802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.930833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.931027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.931058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.931257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.931289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.931412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.931443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.931593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.931623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 
00:27:30.024 [2024-11-20 17:21:47.931815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.931846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.931966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.931998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.932186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.932226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.932410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.932441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.932624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.932655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 
00:27:30.024 [2024-11-20 17:21:47.932893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.932923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.933189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.933234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.933481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.933513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.933704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.933735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.933907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.933938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 
00:27:30.024 [2024-11-20 17:21:47.934117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.934148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.934271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.934303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.934486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.934517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.024 [2024-11-20 17:21:47.934695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.024 [2024-11-20 17:21:47.934726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.024 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.934831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.934869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 
00:27:30.025 [2024-11-20 17:21:47.935043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.935075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.935259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.935292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.935421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.935452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.935654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.935685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.935943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.935975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 
00:27:30.025 [2024-11-20 17:21:47.936175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.936232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.936352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.936384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.936567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.936599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.936789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.936823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.937062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.937094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 
00:27:30.025 [2024-11-20 17:21:47.937291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.937326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.937517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.937551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.937734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.937767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.937969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.938002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.938240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.938272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 
00:27:30.025 [2024-11-20 17:21:47.938408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.938440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.938568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.938599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.938786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.938818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.939076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.939150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.939374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.939410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 
00:27:30.025 [2024-11-20 17:21:47.939596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.939629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.939808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.939840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.940029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.940061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.940326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.940361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.940482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.940515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 
00:27:30.025 [2024-11-20 17:21:47.940704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.940735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.940977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.941009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.941270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.941305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.941481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.941513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.941697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.941729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 
00:27:30.025 [2024-11-20 17:21:47.941848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.941882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.942075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.942116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.942360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.942392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.942579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.942612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.025 [2024-11-20 17:21:47.942802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.942833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 
00:27:30.025 [2024-11-20 17:21:47.943085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.025 [2024-11-20 17:21:47.943118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.025 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.943357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.943390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.943630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.943661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.943844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.943876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.944069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.944100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 
00:27:30.026 [2024-11-20 17:21:47.944226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.944258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.944382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.944413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.944613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.944645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.944829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.944860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.945058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.945089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 
00:27:30.026 [2024-11-20 17:21:47.945306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.945339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.945451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.945482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.945792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.945825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.946050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.946081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.946266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.946299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 
00:27:30.026 [2024-11-20 17:21:47.946420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.946451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.946662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.946693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.946958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.946988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.947089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.947121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.947259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.947292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 
00:27:30.026 [2024-11-20 17:21:47.947477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.947508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.947688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.947720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.947902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.947933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.948113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.948146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.948337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.948369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 
00:27:30.026 [2024-11-20 17:21:47.948496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.948527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.948653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.948684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.948875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.948905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.949090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.949122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.949235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.949268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 
00:27:30.026 [2024-11-20 17:21:47.949458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.949488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.949660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.949691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.949893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.949925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.950112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.950145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.950278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.950312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 
00:27:30.026 [2024-11-20 17:21:47.950440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.026 [2024-11-20 17:21:47.950473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.026 qpair failed and we were unable to recover it. 00:27:30.026 [2024-11-20 17:21:47.950715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.950752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.950932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.950963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.951227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.951260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.951402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.951435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 
00:27:30.027 [2024-11-20 17:21:47.951559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.951591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.951834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.951864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.952002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.952034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.952296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.952328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.952464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.952494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 
00:27:30.027 [2024-11-20 17:21:47.952686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.952717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.952837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.952869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.953095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.953125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.953314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.953348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.953541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.953572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 
00:27:30.027 [2024-11-20 17:21:47.953699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.953730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.953994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.954027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.954225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.954259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.954387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.954420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.954533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.954563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 
00:27:30.027 [2024-11-20 17:21:47.954667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.954699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.954896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.954927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.955101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.955133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.955248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.955282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.955404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.955434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 
00:27:30.027 [2024-11-20 17:21:47.955627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.955659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.955785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.955816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.027 [2024-11-20 17:21:47.956076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.027 [2024-11-20 17:21:47.956107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.027 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.956237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.956270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.956380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.956412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 
00:27:30.028 [2024-11-20 17:21:47.956650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.956681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.956816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.956848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.957087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.957117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.957241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.957275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.957400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.957431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 
00:27:30.028 [2024-11-20 17:21:47.957634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.957667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.957839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.957870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.958007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.958040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.958166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.958197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.958382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.958413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 
00:27:30.028 [2024-11-20 17:21:47.958605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.958636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.958811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.958855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.958975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.959007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.959185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.959224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.959414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.959444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 
00:27:30.028 [2024-11-20 17:21:47.959749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.959781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.959901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.959931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.960246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.960279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.960466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.028 [2024-11-20 17:21:47.960497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.028 qpair failed and we were unable to recover it. 00:27:30.028 [2024-11-20 17:21:47.960738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.029 [2024-11-20 17:21:47.960770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.029 qpair failed and we were unable to recover it. 
00:27:30.029 [... 105 further repetitions of the same three-line error record (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") for retries timestamped 2024-11-20 17:21:47.960886 through 17:21:47.983579 elided ...]
00:27:30.033 [2024-11-20 17:21:47.983705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.983737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.983927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.983958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.984141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.984172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.984352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.984423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.984637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.984672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 
00:27:30.033 [2024-11-20 17:21:47.984885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.984919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.985132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.985165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.985480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.985514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.985784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.985817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.985944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.985977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 
00:27:30.033 [2024-11-20 17:21:47.986226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.986261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.986529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.986562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.986676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.986707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.986895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.986927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.987183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.987227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 
00:27:30.033 [2024-11-20 17:21:47.987418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.987451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.987588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.987620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.987796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.987828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.988006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.988038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.988180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.988230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 
00:27:30.033 [2024-11-20 17:21:47.988443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.988476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.988610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.988640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.988829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.988862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.989055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.989088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.989223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.989256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 
00:27:30.033 [2024-11-20 17:21:47.989445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.033 [2024-11-20 17:21:47.989478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.033 qpair failed and we were unable to recover it. 00:27:30.033 [2024-11-20 17:21:47.989741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.989774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.990054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.990101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.990383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.990432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.990677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.990709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 
00:27:30.034 [2024-11-20 17:21:47.990948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.990979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.991243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.991275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.991457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.991490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.991666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.991713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.991858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.991889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 
00:27:30.034 [2024-11-20 17:21:47.992018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.992050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.992161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.992194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.992383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.992430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.992634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.992673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.992886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.992919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 
00:27:30.034 [2024-11-20 17:21:47.993103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.993133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.993252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.993286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.993473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.993507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.993746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.993777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.993969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.994001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 
00:27:30.034 [2024-11-20 17:21:47.994109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.994140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.994387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.994429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.034 [2024-11-20 17:21:47.994713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.034 [2024-11-20 17:21:47.994757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.034 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.994879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.994913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.995113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.995144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 
00:27:30.317 [2024-11-20 17:21:47.995335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.995369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.995652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.995685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.995793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.995824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.995933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.995964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.996213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.996246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 
00:27:30.317 [2024-11-20 17:21:47.996421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.996453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.996660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.996691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.996894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.996927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.997058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.997089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.997219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.997252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 
00:27:30.317 [2024-11-20 17:21:47.997388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.997420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.997638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.997670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.997859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.997893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.998067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.998098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.998226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.998259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 
00:27:30.317 [2024-11-20 17:21:47.998435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.998466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.998663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.998694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.998912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.998944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.999153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.999185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.999314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.999347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 
00:27:30.317 [2024-11-20 17:21:47.999464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.999496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.999681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.999715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:47.999895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:47.999927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:48.000119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:48.000157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:48.000370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:48.000404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 
00:27:30.317 [2024-11-20 17:21:48.000578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:48.000609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:48.000749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:48.000781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:48.000979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:48.001012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:48.001284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:48.001317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 00:27:30.317 [2024-11-20 17:21:48.001427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.317 [2024-11-20 17:21:48.001459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.317 qpair failed and we were unable to recover it. 
00:27:30.317 [2024-11-20 17:21:48.001633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.318 [2024-11-20 17:21:48.001665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.318 qpair failed and we were unable to recover it. 00:27:30.318 [2024-11-20 17:21:48.001788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.318 [2024-11-20 17:21:48.001819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.318 qpair failed and we were unable to recover it. 00:27:30.318 [2024-11-20 17:21:48.001924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.318 [2024-11-20 17:21:48.001956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.318 qpair failed and we were unable to recover it. 00:27:30.318 [2024-11-20 17:21:48.002144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.318 [2024-11-20 17:21:48.002176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.318 qpair failed and we were unable to recover it. 00:27:30.318 [2024-11-20 17:21:48.002290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.318 [2024-11-20 17:21:48.002322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.318 qpair failed and we were unable to recover it. 
00:27:30.321 [2024-11-20 17:21:48.025838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.025870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.026056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.026088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.026277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.026312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.026477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.026509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.026688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.026721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 
00:27:30.321 [2024-11-20 17:21:48.026919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.026949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.027141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.027173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.027400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.027438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.027571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.027602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.027866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.027900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 
00:27:30.321 [2024-11-20 17:21:48.028087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.028121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.028389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.028423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.028604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.028636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.028841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.028873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.029086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.029119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 
00:27:30.321 [2024-11-20 17:21:48.029241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.029275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.029516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.029549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.029757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.029790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.029967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.030000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.030120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.030153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 
00:27:30.321 [2024-11-20 17:21:48.030357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.030390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.030611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.030644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.030887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.030918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.031098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.031131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.031336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.031371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 
00:27:30.321 [2024-11-20 17:21:48.031493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.031526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.031791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.031824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.032013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.032045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.032221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.032255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 00:27:30.321 [2024-11-20 17:21:48.032463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.321 [2024-11-20 17:21:48.032496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.321 qpair failed and we were unable to recover it. 
00:27:30.321 [2024-11-20 17:21:48.032616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.032650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.032792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.032825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.033001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.033033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.033155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.033187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.033317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.033349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 
00:27:30.322 [2024-11-20 17:21:48.033461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.033495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.033616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.033653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.033837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.033868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.034135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.034179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.034320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.034354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 
00:27:30.322 [2024-11-20 17:21:48.034483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.034516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.034732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.034763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.034883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.034917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.035092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.035126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.035319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.035354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 
00:27:30.322 [2024-11-20 17:21:48.035668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.035700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.035945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.035977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.036246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.036278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.036466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.036499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.036623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.036655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 
00:27:30.322 [2024-11-20 17:21:48.036852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.036884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.037176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.037215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.037400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.037433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.037676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.037709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.037904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.037938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 
00:27:30.322 [2024-11-20 17:21:48.038212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.038245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.038502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.038536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.038711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.038744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.038953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.038985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.039260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.039293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 
00:27:30.322 [2024-11-20 17:21:48.039431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.039465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.039590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.039621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.039888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.039920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.040048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.040082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.040269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.040304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 
00:27:30.322 [2024-11-20 17:21:48.040499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.040531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.040716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.040749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.322 qpair failed and we were unable to recover it. 00:27:30.322 [2024-11-20 17:21:48.040863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.322 [2024-11-20 17:21:48.040895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.041134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.041166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.041462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.041495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 
00:27:30.323 [2024-11-20 17:21:48.041673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.041706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.041904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.041935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.042189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.042230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.042427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.042460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.042730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.042761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 
00:27:30.323 [2024-11-20 17:21:48.042886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.042918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.043049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.043082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.043255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.043289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.043490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.043528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.043712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.043744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 
00:27:30.323 [2024-11-20 17:21:48.043952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.043984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.044097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.044134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.044325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.044361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.044565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.044596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.044860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.044893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 
00:27:30.323 [2024-11-20 17:21:48.045024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.045056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.045248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.045282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.045477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.045510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.045698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.045732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.045992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.046023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 
00:27:30.323 [2024-11-20 17:21:48.046164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.046197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.046382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.046420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.046546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.046580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.046695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.046728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.046848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.046879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 
00:27:30.323 [2024-11-20 17:21:48.047068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.047101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.047289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.047323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.047499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.047539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.047719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.047751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.047934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.047967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 
00:27:30.323 [2024-11-20 17:21:48.048246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.048281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.048404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.048436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.048536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.048570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.048747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.048780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 00:27:30.323 [2024-11-20 17:21:48.048900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.048932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.323 qpair failed and we were unable to recover it. 
00:27:30.323 [2024-11-20 17:21:48.049249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.323 [2024-11-20 17:21:48.049322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.049547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.049581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.049725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.049759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.049880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.049912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.050107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.050139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 
00:27:30.324 [2024-11-20 17:21:48.050343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.050376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.050626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.050658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.050761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.050795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.051033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.051065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.051250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.051284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 
00:27:30.324 [2024-11-20 17:21:48.051525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.051557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.051753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.051785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.051971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.052003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.052190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.052244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.052350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.052382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 
00:27:30.324 [2024-11-20 17:21:48.052506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.052538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.052750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.052780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.053016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.053048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.053241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.053275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.053562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.053594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 
00:27:30.324 [2024-11-20 17:21:48.053783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.053815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.053920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.053952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.054240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.054274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.054412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.054444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.054626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.054660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 
00:27:30.324 [2024-11-20 17:21:48.054842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.054873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.055056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.055088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.055270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.055304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.055509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.055542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.055787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.055824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 
00:27:30.324 [2024-11-20 17:21:48.055943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.055974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.056160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.056194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.056409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.056440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.056631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.056665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.056796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.056827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 
00:27:30.324 [2024-11-20 17:21:48.056956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.056989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.057105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.057136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.057313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.057347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.324 qpair failed and we were unable to recover it. 00:27:30.324 [2024-11-20 17:21:48.057539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.324 [2024-11-20 17:21:48.057570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.057743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.057774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 
00:27:30.325 [2024-11-20 17:21:48.058010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.058044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.058241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.058275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.058518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.058550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.058682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.058715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.058932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.058964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 
00:27:30.325 [2024-11-20 17:21:48.059133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.059165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.059299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.059333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.059453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.059485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.059728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.059758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.059939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.059970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 
00:27:30.325 [2024-11-20 17:21:48.060099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.060130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.060315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.060349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.060471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.060502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.060673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.060710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.060886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.060917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 
00:27:30.325 [2024-11-20 17:21:48.061104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.061135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.061309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.061341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.061524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.061555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.061768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.061799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.061914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.061946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 
00:27:30.325 [2024-11-20 17:21:48.062053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.062084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.062257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.062288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.062528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.062559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.062743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.062774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 00:27:30.325 [2024-11-20 17:21:48.062887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.325 [2024-11-20 17:21:48.062917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.325 qpair failed and we were unable to recover it. 
00:27:30.325 [2024-11-20 17:21:48.063105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:30.325 [2024-11-20 17:21:48.063136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 
00:27:30.325 qpair failed and we were unable to recover it. 
00:27:30.325 [last 3 messages repeated 14 more times for tqpair=0x7fc468000b90, through 2024-11-20 17:21:48.066268] 
00:27:30.326 [2024-11-20 17:21:48.066479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:30.326 [2024-11-20 17:21:48.066550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 
00:27:30.326 qpair failed and we were unable to recover it. 
00:27:30.328 [last 3 messages repeated 99 more times for tqpair=0x7fc474000b90, through 2024-11-20 17:21:48.088970] 
00:27:30.328 [2024-11-20 17:21:48.089177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.328 [2024-11-20 17:21:48.089217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.328 qpair failed and we were unable to recover it. 00:27:30.328 [2024-11-20 17:21:48.089394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.328 [2024-11-20 17:21:48.089426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.328 qpair failed and we were unable to recover it. 00:27:30.328 [2024-11-20 17:21:48.089656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.328 [2024-11-20 17:21:48.089687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.328 qpair failed and we were unable to recover it. 00:27:30.328 [2024-11-20 17:21:48.089809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.328 [2024-11-20 17:21:48.089841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.328 qpair failed and we were unable to recover it. 00:27:30.328 [2024-11-20 17:21:48.089972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.328 [2024-11-20 17:21:48.090003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.328 qpair failed and we were unable to recover it. 
00:27:30.328 [2024-11-20 17:21:48.090211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.328 [2024-11-20 17:21:48.090244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.328 qpair failed and we were unable to recover it. 00:27:30.328 [2024-11-20 17:21:48.090427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.090458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.090646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.090679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.090941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.090973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.091237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.091270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 
00:27:30.329 [2024-11-20 17:21:48.091377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.091409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.091608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.091639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.091809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.091840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.091963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.091995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.092262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.092296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 
00:27:30.329 [2024-11-20 17:21:48.092558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.092588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.092827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.092858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.093034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.093066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.093265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.093298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.093487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.093518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 
00:27:30.329 [2024-11-20 17:21:48.093714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.093752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.094010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.094041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.094278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.094311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.094550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.094582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.094769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.094801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 
00:27:30.329 [2024-11-20 17:21:48.094994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.095025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.095199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.095240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.095509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.095541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.095642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.095673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.095851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.095882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 
00:27:30.329 [2024-11-20 17:21:48.096065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.096096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.096197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.096247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.096411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.096443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.096561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.096591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.096786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.096819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 
00:27:30.329 [2024-11-20 17:21:48.096943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.096974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.097224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.097258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.097472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.097503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.097635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.097667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.097916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.097946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 
00:27:30.329 [2024-11-20 17:21:48.098179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.098219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.098322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.098354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.098544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.098576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.329 [2024-11-20 17:21:48.098778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.329 [2024-11-20 17:21:48.098809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.329 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.098915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.098947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 
00:27:30.330 [2024-11-20 17:21:48.099224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.099257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.099426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.099458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.099600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.099632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.099752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.099783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.099970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.100002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 
00:27:30.330 [2024-11-20 17:21:48.100238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.100271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.100535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.100567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.100748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.100778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.100953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.100984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.101165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.101197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 
00:27:30.330 [2024-11-20 17:21:48.101380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.101412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.101549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.101581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.101760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.101791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.102004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.102035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.102185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.102227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 
00:27:30.330 [2024-11-20 17:21:48.102409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.102447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.102742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.102774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.103020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.103051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.103172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.103213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.103328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.103360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 
00:27:30.330 [2024-11-20 17:21:48.103600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.103632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.103774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.103806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.103922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.103952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.104135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.104166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.104364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.104397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 
00:27:30.330 [2024-11-20 17:21:48.104660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.104692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.104873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.104904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.105021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.105051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.105168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.105199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.105397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.105429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 
00:27:30.330 [2024-11-20 17:21:48.105621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.105652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.105835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.330 [2024-11-20 17:21:48.105866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.330 qpair failed and we were unable to recover it. 00:27:30.330 [2024-11-20 17:21:48.106109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.331 [2024-11-20 17:21:48.106140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.331 qpair failed and we were unable to recover it. 00:27:30.331 [2024-11-20 17:21:48.106277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.331 [2024-11-20 17:21:48.106310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.331 qpair failed and we were unable to recover it. 00:27:30.331 [2024-11-20 17:21:48.106481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.331 [2024-11-20 17:21:48.106511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.331 qpair failed and we were unable to recover it. 
00:27:30.333 [2024-11-20 17:21:48.130830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.130862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.131051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.131081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.131212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.131244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.131432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.131463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.131649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.131681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 
00:27:30.334 [2024-11-20 17:21:48.131941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.131972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.132241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.132275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.132559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.132591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.132845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.132876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.133013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.133046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 
00:27:30.334 [2024-11-20 17:21:48.133236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.133269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.133452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.133484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.133693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.133725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.133910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.133941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.134142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.134174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 
00:27:30.334 [2024-11-20 17:21:48.134307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.134340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.134459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.134490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.134671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.134704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.134897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.134929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.135195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.135235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 
00:27:30.334 [2024-11-20 17:21:48.135427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.135459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.135643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.135675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.135845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.135877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.136117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.136147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.136434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.136467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 
00:27:30.334 [2024-11-20 17:21:48.136641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.136672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.136934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.136965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.137217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.137250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.137394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.137426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.137653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.137684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 
00:27:30.334 [2024-11-20 17:21:48.137955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.137993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.138183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.138224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.138348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.138380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.138575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.138606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.138739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.138771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 
00:27:30.334 [2024-11-20 17:21:48.138959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.138990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.139174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.139214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.139403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.334 [2024-11-20 17:21:48.139435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.334 qpair failed and we were unable to recover it. 00:27:30.334 [2024-11-20 17:21:48.139641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.139673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.139958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.139989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 
00:27:30.335 [2024-11-20 17:21:48.140168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.140199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.140343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.140375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.140560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.140590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.140856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.140888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.141083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.141116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 
00:27:30.335 [2024-11-20 17:21:48.141315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.141348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.141468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.141499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.141818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.141849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.142033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.142064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.142242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.142275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 
00:27:30.335 [2024-11-20 17:21:48.142423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.142454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.142640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.142671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.142956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.142987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.143268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.143301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.143436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.143468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 
00:27:30.335 [2024-11-20 17:21:48.143678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.143709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.143960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.143992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.144174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.144215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.144354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.144387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.144578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.144610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 
00:27:30.335 [2024-11-20 17:21:48.144791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.144823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.145011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.145043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.145284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.145318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.145489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.145521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.145705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.145736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 
00:27:30.335 [2024-11-20 17:21:48.145944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.145977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.146248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.146281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.146465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.146497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.146623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.146656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.146789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.146820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 
00:27:30.335 [2024-11-20 17:21:48.147006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.335 [2024-11-20 17:21:48.147043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.335 qpair failed and we were unable to recover it. 00:27:30.335 [2024-11-20 17:21:48.147228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.336 [2024-11-20 17:21:48.147262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.336 qpair failed and we were unable to recover it. 00:27:30.336 [2024-11-20 17:21:48.147468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.336 [2024-11-20 17:21:48.147499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.336 qpair failed and we were unable to recover it. 00:27:30.336 [2024-11-20 17:21:48.147649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.336 [2024-11-20 17:21:48.147680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.336 qpair failed and we were unable to recover it. 00:27:30.336 [2024-11-20 17:21:48.147801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.336 [2024-11-20 17:21:48.147833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.336 qpair failed and we were unable to recover it. 
00:27:30.336 [2024-11-20 17:21:48.148028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.148059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.148247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.148279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.148462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.148494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.148611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.148642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.148774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.148805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.149000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.149030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.149152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.149184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.149397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.149430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.149625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.149656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.149915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.149947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.150126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.150158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.150289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.150322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.150457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.150489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.150758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.150789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.150919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.150951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.151157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.151188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.151333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.151365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.151502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.151532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.151726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.151758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.151999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.152030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.152223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.152255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.152444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.152476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.152655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.152687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.152870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.152901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.153167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.153199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.153457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.153490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.153611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.153644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.153815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.153846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.154052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.154084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.154299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.154332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.154594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.154626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.154838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.154870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.155083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.155115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.155253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.336 [2024-11-20 17:21:48.155286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.336 qpair failed and we were unable to recover it.
00:27:30.336 [2024-11-20 17:21:48.155421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.155453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.155663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.155700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.155892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.155924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.156175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.156229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.156359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.156391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.156569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.156601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.156783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.156815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.157001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.157032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.157214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.157247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.157483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.157515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.157655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.157687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.157822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.157855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.158069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.158101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.158229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.158264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.158458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.158492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.158788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.158823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.158940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.158973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.159102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.159135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.159323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.159357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.159548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.159580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.159765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.159797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.160032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.160064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.160179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.160219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.160409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.160441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.160562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.160593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.160720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.160753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.160885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.160916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.161095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.161127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.161382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.161416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.161524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.161555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.161737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.161768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.161873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.161905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.162107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.162138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.162332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.162365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.162484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.162515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.162700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.162732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.162849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.162880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.163000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.337 [2024-11-20 17:21:48.163031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.337 qpair failed and we were unable to recover it.
00:27:30.337 [2024-11-20 17:21:48.163162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.163194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.163395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.163427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.163697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.163728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.163833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.163871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.163989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.164020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.164243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.164276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.164449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.164480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.164721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.164752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.164891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.164923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.165167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.165200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.165408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.165440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.165569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.165600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.165722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.165753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.165897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.165929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.166047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.166079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.166217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.166257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.166468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.166509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.166668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.166709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.166845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.166889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.167092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.167133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.167342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.167376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.167499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.167531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.167669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.167701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.167808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.167840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.168023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.168054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.168248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.168281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.168464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.168496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.168748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.168780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.168961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.168993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.169168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.169199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.169348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.169381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.169501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.169533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.169723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.169755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.169881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.169912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.170151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.170183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.170334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.170366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.170490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.170521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.338 [2024-11-20 17:21:48.170788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.338 [2024-11-20 17:21:48.170820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.338 qpair failed and we were unable to recover it.
00:27:30.339 [2024-11-20 17:21:48.171027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.339 [2024-11-20 17:21:48.171059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.339 qpair failed and we were unable to recover it.
00:27:30.339 [2024-11-20 17:21:48.171306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.339 [2024-11-20 17:21:48.171339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.339 qpair failed and we were unable to recover it.
00:27:30.339 [2024-11-20 17:21:48.171458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.339 [2024-11-20 17:21:48.171489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.339 qpair failed and we were unable to recover it.
00:27:30.339 [2024-11-20 17:21:48.171624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.339 [2024-11-20 17:21:48.171656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.339 qpair failed and we were unable to recover it.
00:27:30.339 [2024-11-20 17:21:48.171798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.339 [2024-11-20 17:21:48.171829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.339 qpair failed and we were unable to recover it.
00:27:30.339 [2024-11-20 17:21:48.171937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.171974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.172153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.172185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.172314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.172346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.172451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.172483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.172659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.172691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 
00:27:30.339 [2024-11-20 17:21:48.172905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.172936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.173182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.173224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.173346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.173378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.173574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.173605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.173781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.173812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 
00:27:30.339 [2024-11-20 17:21:48.173917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.173949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.174141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.174173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.174302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.174335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.174449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.174480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.174607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.174639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 
00:27:30.339 [2024-11-20 17:21:48.174825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.174856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.175029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.175061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.175244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.175277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.175390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.175422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.175662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.175693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 
00:27:30.339 [2024-11-20 17:21:48.175859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.175890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.175997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.176029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.176144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.176175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.176463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.176495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.176631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.176663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 
00:27:30.339 [2024-11-20 17:21:48.176834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.176865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.176971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.177003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.177247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.177318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.177449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.177485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.177664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.177695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 
00:27:30.339 [2024-11-20 17:21:48.177871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.177903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.339 qpair failed and we were unable to recover it. 00:27:30.339 [2024-11-20 17:21:48.178080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.339 [2024-11-20 17:21:48.178112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.178234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.178267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.178502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.178533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.178723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.178755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 
00:27:30.340 [2024-11-20 17:21:48.178883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.178915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.179120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.179150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.179335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.179368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.179504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.179535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.179650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.179681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 
00:27:30.340 [2024-11-20 17:21:48.179788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.179834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.180026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.180057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.180185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.180224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.180411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.180442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.180614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.180645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 
00:27:30.340 [2024-11-20 17:21:48.180841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.180873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.181065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.181095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.181289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.181336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.181457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.181488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.181683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.181713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 
00:27:30.340 [2024-11-20 17:21:48.181829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.181859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.182064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.182095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.182296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.182335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.182450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.182480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.182601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.182633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 
00:27:30.340 [2024-11-20 17:21:48.182770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.182800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.182929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.182960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.183150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.183181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.183319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.183351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.183460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.183491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 
00:27:30.340 [2024-11-20 17:21:48.183684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.183716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.183821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.183852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.184060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.184092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.184332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.184365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.184570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.184601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 
00:27:30.340 [2024-11-20 17:21:48.184717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.340 [2024-11-20 17:21:48.184748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.340 qpair failed and we were unable to recover it. 00:27:30.340 [2024-11-20 17:21:48.184855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.184886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 00:27:30.341 [2024-11-20 17:21:48.185119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.185190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 00:27:30.341 [2024-11-20 17:21:48.185374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.185410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 00:27:30.341 [2024-11-20 17:21:48.185677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.185709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 
00:27:30.341 [2024-11-20 17:21:48.185882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.185913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 00:27:30.341 [2024-11-20 17:21:48.186109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.186140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 00:27:30.341 [2024-11-20 17:21:48.186381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.186412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 00:27:30.341 [2024-11-20 17:21:48.186614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.186644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 00:27:30.341 [2024-11-20 17:21:48.186920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.341 [2024-11-20 17:21:48.186951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.341 qpair failed and we were unable to recover it. 
00:27:30.341 [2024-11-20 17:21:48.187223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.341 [2024-11-20 17:21:48.187255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:30.341 qpair failed and we were unable to recover it.
[... the same connect()/nvme_tcp_qpair_connect_sock/qpair-failed error triplet repeats continuously with successive timestamps; tqpair (0x7fc46c000b90), addr (10.0.0.2), port (4420), and errno (111) are unchanged throughout ...]
00:27:30.344 [2024-11-20 17:21:48.211397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.344 [2024-11-20 17:21:48.211430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:30.344 qpair failed and we were unable to recover it.
00:27:30.344 [2024-11-20 17:21:48.211624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.211655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.211889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.211922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.212113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.212145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.212342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.212376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.212570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.212604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 
00:27:30.344 [2024-11-20 17:21:48.212723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.212755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.212932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.212964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.213139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.213173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.213302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.213334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.213582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.213614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 
00:27:30.344 [2024-11-20 17:21:48.213802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.213834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.214060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.214091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.214287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.214320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.214496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.214526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.214712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.214744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 
00:27:30.344 [2024-11-20 17:21:48.214987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.215018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.215303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.215335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.215510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.215542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.215666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.215698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.215814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.215845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 
00:27:30.344 [2024-11-20 17:21:48.215971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.216003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.216189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.216243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.216433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.344 [2024-11-20 17:21:48.216466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.344 qpair failed and we were unable to recover it. 00:27:30.344 [2024-11-20 17:21:48.216707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.216740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.216921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.216954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 
00:27:30.345 [2024-11-20 17:21:48.217060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.217090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.217226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.217261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.217584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.217655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.217945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.217980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.218195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.218246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 
00:27:30.345 [2024-11-20 17:21:48.218559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.218591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.218778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.218810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.219005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.219035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.219233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.219266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.219386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.219418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 
00:27:30.345 [2024-11-20 17:21:48.219661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.219691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.219966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.219998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.220125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.220166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.220363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.220396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.220516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.220547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 
00:27:30.345 [2024-11-20 17:21:48.220717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.220750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.220936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.220967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.221155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.221186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.221442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.221475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.221660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.221693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 
00:27:30.345 [2024-11-20 17:21:48.221932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.221965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.222215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.222247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.222366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.222398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.222573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.222605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.222792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.222823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 
00:27:30.345 [2024-11-20 17:21:48.222994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.223026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.223241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.223275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.223445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.223476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.223591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.223622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.223812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.223846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 
00:27:30.345 [2024-11-20 17:21:48.224022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.224053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.224320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.224353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.224622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.224653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.224940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.224971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.225236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.225269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 
00:27:30.345 [2024-11-20 17:21:48.225407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.345 [2024-11-20 17:21:48.225438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.345 qpair failed and we were unable to recover it. 00:27:30.345 [2024-11-20 17:21:48.225622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.225653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.225844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.225877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.226012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.226045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.226159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.226192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 
00:27:30.346 [2024-11-20 17:21:48.226343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.226373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.226545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.226577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.226838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.226868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.227131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.227162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.227301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.227335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 
00:27:30.346 [2024-11-20 17:21:48.227465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.227495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.227740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.227772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.227954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.227985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.228098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.228130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.228341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.228374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 
00:27:30.346 [2024-11-20 17:21:48.228484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.228517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.228688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.228719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.228825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.228863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.228998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.229037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 00:27:30.346 [2024-11-20 17:21:48.229247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.346 [2024-11-20 17:21:48.229282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.346 qpair failed and we were unable to recover it. 
00:27:30.349 [2024-11-20 17:21:48.254251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.254285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.254464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.254497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.254699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.254732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.254914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.254946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.255065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.255099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 
00:27:30.349 [2024-11-20 17:21:48.255342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.255377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.255517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.255550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.255732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.255769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.255895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.255927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.256104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.256137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 
00:27:30.349 [2024-11-20 17:21:48.256411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.256444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.256637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.256668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.256778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.256810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.256932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.256963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.257171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.257209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 
00:27:30.349 [2024-11-20 17:21:48.257401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.257434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.257540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.257572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.257747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.349 [2024-11-20 17:21:48.257779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.349 qpair failed and we were unable to recover it. 00:27:30.349 [2024-11-20 17:21:48.257913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.257946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.258130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.258161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 
00:27:30.350 [2024-11-20 17:21:48.258413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.258448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.258676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.258710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.258894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.258927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.259064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.259096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.259283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.259318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 
00:27:30.350 [2024-11-20 17:21:48.259520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.259559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.259677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.259719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.259847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.259879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.260075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.260107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.260254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.260287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 
00:27:30.350 [2024-11-20 17:21:48.260394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.260426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.260607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.260639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.260810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.260841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.261026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.261057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.261173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.261211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 
00:27:30.350 [2024-11-20 17:21:48.261390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.261422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.261546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.261577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.261755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.261786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.261954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.261985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.262109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.262140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 
00:27:30.350 [2024-11-20 17:21:48.262259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.262291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.262475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.262506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.262610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.262642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.262893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.262924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.263101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.263132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 
00:27:30.350 [2024-11-20 17:21:48.263264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.263298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.263411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.263443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.263628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.263666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.263906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.263938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.264114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.264145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 
00:27:30.350 [2024-11-20 17:21:48.264332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.264365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.264502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.264533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.264758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.264789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.264906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.264937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.265075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.265106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 
00:27:30.350 [2024-11-20 17:21:48.265228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.350 [2024-11-20 17:21:48.265260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.350 qpair failed and we were unable to recover it. 00:27:30.350 [2024-11-20 17:21:48.265455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.265487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.265695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.265727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.265964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.265995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.266242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.266274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 
00:27:30.351 [2024-11-20 17:21:48.266393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.266425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.266568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.266599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.266810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.266842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.267015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.267047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.267153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.267184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 
00:27:30.351 [2024-11-20 17:21:48.267437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.267470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.267593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.267624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.267777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.267808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.267996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.268028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.268212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.268244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 
00:27:30.351 [2024-11-20 17:21:48.268426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.268458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.268653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.268684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.268935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.268966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.269149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.269180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.269458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.269492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 
00:27:30.351 [2024-11-20 17:21:48.269613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.269644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.269905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.269936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.270057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.270089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.270269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.270302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.270421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.270453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 
00:27:30.351 [2024-11-20 17:21:48.270641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.270674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.270809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.270839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.270957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.270989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.271163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.271194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.271412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.271444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 
00:27:30.351 [2024-11-20 17:21:48.271705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.271736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.271938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.271969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.272101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.272139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.272323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.272355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.272479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.272510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 
00:27:30.351 [2024-11-20 17:21:48.272724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.272756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.272861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.272892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.273131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.273163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.351 [2024-11-20 17:21:48.273379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.351 [2024-11-20 17:21:48.273411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.351 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.273588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.273620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 
00:27:30.352 [2024-11-20 17:21:48.273862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.273893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.274101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.274132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.274325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.274358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.274483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.274514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.274687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.274719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 
00:27:30.352 [2024-11-20 17:21:48.274903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.274934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.275135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.275166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.275369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.275402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.275581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.275612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.275731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.275762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 
00:27:30.352 [2024-11-20 17:21:48.275871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.275903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.276079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.276110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.276325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.276358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.276576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.276608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.276803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.276835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 
00:27:30.352 [2024-11-20 17:21:48.277112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.277143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.277390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.277422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.277611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.277643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.277819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.277850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.278049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.278082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 
00:27:30.352 [2024-11-20 17:21:48.278271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.278304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.278536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.278571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.278821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.278852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.278961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.278992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.279105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.279137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 
00:27:30.352 [2024-11-20 17:21:48.279263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.279297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.279513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.279544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.279811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.279843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.279978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.280009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.280212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.280245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 
00:27:30.352 [2024-11-20 17:21:48.280375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.280406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.280527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.280558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.280667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.280704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.352 [2024-11-20 17:21:48.280827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.352 [2024-11-20 17:21:48.280858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.352 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.280976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.281007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 
00:27:30.353 [2024-11-20 17:21:48.281130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.281161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.281358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.281391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.281564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.281595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.281714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.281745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.281925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.281957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 
00:27:30.353 [2024-11-20 17:21:48.282137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.282168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.282383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.282415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.282601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.282632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.282837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.282868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.283062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.283094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 
00:27:30.353 [2024-11-20 17:21:48.283281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.283315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.283523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.283554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.283709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.283740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.283934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.283964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.284222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.284255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 
00:27:30.353 [2024-11-20 17:21:48.284506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.284538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.284643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.284674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.284865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.284897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.285071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.285102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.285274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.285323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 
00:27:30.353 [2024-11-20 17:21:48.285538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.285570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.285709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.285741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.285979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.286010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.286149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.286180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.286470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.286503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 
00:27:30.353 [2024-11-20 17:21:48.286640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.286671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.286790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.286822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.287011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.287043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.287236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.287268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.287448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.287480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 
00:27:30.353 [2024-11-20 17:21:48.287638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.287670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.287845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.287876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.353 qpair failed and we were unable to recover it. 00:27:30.353 [2024-11-20 17:21:48.287990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.353 [2024-11-20 17:21:48.288021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.288264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.288297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.288466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.288497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 
00:27:30.354 [2024-11-20 17:21:48.288618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.288650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.288818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.288850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.289058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.289096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.289292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.289325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.289602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.289633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 
00:27:30.354 [2024-11-20 17:21:48.289870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.289902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.290104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.290135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.290275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.290308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.290571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.290602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 00:27:30.354 [2024-11-20 17:21:48.290731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.354 [2024-11-20 17:21:48.290762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.354 qpair failed and we were unable to recover it. 
00:27:30.357 [2024-11-20 17:21:48.316320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.316353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.316647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.316679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.316951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.316982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.317115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.317147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.317450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.317483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 
00:27:30.357 [2024-11-20 17:21:48.317759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.317790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.317978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.318009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.318197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.318238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.318417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.318448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.318560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.318591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 
00:27:30.357 [2024-11-20 17:21:48.318830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.318861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.319047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.319078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.319293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.319326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.319457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.319489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.319676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.319707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 
00:27:30.357 [2024-11-20 17:21:48.319880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.319911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.320111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.320143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.320360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.320393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.320524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.320555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.320734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.320765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 
00:27:30.357 [2024-11-20 17:21:48.320950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.320982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.321088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.321119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.321361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.321393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.321505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.321537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.321802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.321833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 
00:27:30.357 [2024-11-20 17:21:48.321952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.321983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.322181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.322219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.322474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.322506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.322683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.322714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 00:27:30.357 [2024-11-20 17:21:48.322822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.357 [2024-11-20 17:21:48.322859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.357 qpair failed and we were unable to recover it. 
00:27:30.357 [2024-11-20 17:21:48.322987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.323020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.323222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.323254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.323388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.323420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.323529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.323560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.323827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.323858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 
00:27:30.358 [2024-11-20 17:21:48.324043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.324074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.324341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.324374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.324570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.324602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.324707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.324738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.324856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.324887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 
00:27:30.358 [2024-11-20 17:21:48.325020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.325052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.325295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.325327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.325521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.325553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.325819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.325851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.326050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.326081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 
00:27:30.358 [2024-11-20 17:21:48.326269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.326302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.326541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.326572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.326762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.326794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.326990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.327022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.327291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.327324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 
00:27:30.358 [2024-11-20 17:21:48.327510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.327542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.327764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.327795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.327997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.328029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.328294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.328328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.328551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.328585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 
00:27:30.358 [2024-11-20 17:21:48.328728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.328760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.328958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.328996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.329149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.329196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.329415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.329461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.329668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.329700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 
00:27:30.358 [2024-11-20 17:21:48.329875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.329905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.330187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.330229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.330426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.330457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.330651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.330681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.330812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.330843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 
00:27:30.358 [2024-11-20 17:21:48.331083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.331114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.358 qpair failed and we were unable to recover it. 00:27:30.358 [2024-11-20 17:21:48.331335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.358 [2024-11-20 17:21:48.331380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.359 [2024-11-20 17:21:48.331549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.331595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.359 [2024-11-20 17:21:48.331805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.331841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.359 [2024-11-20 17:21:48.332048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.332089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 
00:27:30.359 [2024-11-20 17:21:48.332376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.332409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.359 [2024-11-20 17:21:48.332525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.332556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.359 [2024-11-20 17:21:48.332818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.332850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.359 [2024-11-20 17:21:48.333104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.333135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.359 [2024-11-20 17:21:48.333377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.333421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 
00:27:30.359 [2024-11-20 17:21:48.333651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.333698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.359 [2024-11-20 17:21:48.333896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.359 [2024-11-20 17:21:48.333933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.359 qpair failed and we were unable to recover it. 00:27:30.637 [2024-11-20 17:21:48.334038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.637 [2024-11-20 17:21:48.334070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.637 qpair failed and we were unable to recover it. 00:27:30.637 [2024-11-20 17:21:48.334323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.637 [2024-11-20 17:21:48.334355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.637 qpair failed and we were unable to recover it. 00:27:30.637 [2024-11-20 17:21:48.334641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.637 [2024-11-20 17:21:48.334673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.637 qpair failed and we were unable to recover it. 
00:27:30.640 [2024-11-20 17:21:48.360459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.360491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.640 qpair failed and we were unable to recover it. 00:27:30.640 [2024-11-20 17:21:48.360756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.360787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.640 qpair failed and we were unable to recover it. 00:27:30.640 [2024-11-20 17:21:48.360917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.360948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.640 qpair failed and we were unable to recover it. 00:27:30.640 [2024-11-20 17:21:48.361156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.361187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.640 qpair failed and we were unable to recover it. 00:27:30.640 [2024-11-20 17:21:48.361386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.361416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.640 qpair failed and we were unable to recover it. 
00:27:30.640 [2024-11-20 17:21:48.361554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.361585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.640 qpair failed and we were unable to recover it. 00:27:30.640 [2024-11-20 17:21:48.361778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.361809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.640 qpair failed and we were unable to recover it. 00:27:30.640 [2024-11-20 17:21:48.362052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.362082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.640 qpair failed and we were unable to recover it. 00:27:30.640 [2024-11-20 17:21:48.362258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.640 [2024-11-20 17:21:48.362290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.362477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.362508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 
00:27:30.641 [2024-11-20 17:21:48.362696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.362726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.362963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.362994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.363173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.363223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.363415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.363447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.363654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.363685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 
00:27:30.641 [2024-11-20 17:21:48.363898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.363928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.364104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.364134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.364374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.364407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.364634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.364664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.364855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.364885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 
00:27:30.641 [2024-11-20 17:21:48.365059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.365089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.365278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.365310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.365429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.365459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.365729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.365760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.365949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.365980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 
00:27:30.641 [2024-11-20 17:21:48.366121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.366151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.366357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.366389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.366568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.366599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.366788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.366819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.366943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.366973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 
00:27:30.641 [2024-11-20 17:21:48.367218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.367251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.367427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.367456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.367648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.367679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.367886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.367917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.368031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.368061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 
00:27:30.641 [2024-11-20 17:21:48.368195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.368237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.368361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.368393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.368597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.368627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.368747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.368778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.368973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.369009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 
00:27:30.641 [2024-11-20 17:21:48.369142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.369173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.369301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.641 [2024-11-20 17:21:48.369331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.641 qpair failed and we were unable to recover it. 00:27:30.641 [2024-11-20 17:21:48.369596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.369627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.369824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.369855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.370046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.370077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 
00:27:30.642 [2024-11-20 17:21:48.370224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.370256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.370449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.370479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.370718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.370749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.370942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.370972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.371154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.371184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 
00:27:30.642 [2024-11-20 17:21:48.371391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.371423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.371609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.371639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.371822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.371852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.372052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.372083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.372330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.372363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 
00:27:30.642 [2024-11-20 17:21:48.372630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.372662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.372903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.372933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.373102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.373133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.373326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.373358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.373538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.373567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 
00:27:30.642 [2024-11-20 17:21:48.373778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.373809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.374062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.374093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.374215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.374247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.374449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.374480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.374677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.374707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 
00:27:30.642 [2024-11-20 17:21:48.374835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.374866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.375046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.375077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.375257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.375289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.375478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.375508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.642 [2024-11-20 17:21:48.375647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.375677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 
00:27:30.642 [2024-11-20 17:21:48.375849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.642 [2024-11-20 17:21:48.375880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.642 qpair failed and we were unable to recover it. 00:27:30.643 [2024-11-20 17:21:48.376157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.376187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 00:27:30.643 [2024-11-20 17:21:48.376316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.376347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 00:27:30.643 [2024-11-20 17:21:48.376522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.376554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 00:27:30.643 [2024-11-20 17:21:48.376805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.376835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 
00:27:30.643 [2024-11-20 17:21:48.377031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.377062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 00:27:30.643 [2024-11-20 17:21:48.377194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.377235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 00:27:30.643 [2024-11-20 17:21:48.377418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.377449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 00:27:30.643 [2024-11-20 17:21:48.377651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.377682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 00:27:30.643 [2024-11-20 17:21:48.377815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.643 [2024-11-20 17:21:48.377852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.643 qpair failed and we were unable to recover it. 
00:27:30.646 [the same three-line sequence repeated for every subsequent retry through 17:21:48.402273: connect() to addr=10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:27:30.646 [2024-11-20 17:21:48.402516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.402547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.402656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.402687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.402921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.402952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.403211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.403244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.403436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.403467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 
00:27:30.646 [2024-11-20 17:21:48.403642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.403673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.403940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.403972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.404221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.404253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.404519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.404551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.404742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.404786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 
00:27:30.646 [2024-11-20 17:21:48.404925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.404956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.405193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.405234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.405451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.405483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.405685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.405716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.405979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.406010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 
00:27:30.646 [2024-11-20 17:21:48.406277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.406311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.406425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.646 [2024-11-20 17:21:48.406455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.646 qpair failed and we were unable to recover it. 00:27:30.646 [2024-11-20 17:21:48.406659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.406690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.406880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.406911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.407115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.407146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 
00:27:30.647 [2024-11-20 17:21:48.407415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.407447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.407626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.407657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.407767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.407798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.407942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.407974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.408163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.408194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 
00:27:30.647 [2024-11-20 17:21:48.408393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.408425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.408610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.408641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.408758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.408789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.409055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.409087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.409221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.409253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 
00:27:30.647 [2024-11-20 17:21:48.409441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.409472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.409663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.409694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.409884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.409916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.410180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.410237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.410506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.410538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 
00:27:30.647 [2024-11-20 17:21:48.410719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.410750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.410882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.410913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.411045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.411076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.411266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.411298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.411472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.411503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 
00:27:30.647 [2024-11-20 17:21:48.411611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.411642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.411826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.411856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.412086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.412117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.412228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.412261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.412381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.412411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 
00:27:30.647 [2024-11-20 17:21:48.412518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.412549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.412742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.647 [2024-11-20 17:21:48.412774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.647 qpair failed and we were unable to recover it. 00:27:30.647 [2024-11-20 17:21:48.412900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.412931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.413170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.413210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.413346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.413384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 
00:27:30.648 [2024-11-20 17:21:48.413559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.413590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.413777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.413808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.414094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.414126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.414310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.414343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.414522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.414553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 
00:27:30.648 [2024-11-20 17:21:48.414680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.414711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.414846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.414876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.414981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.415012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.415196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.415239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.415481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.415513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 
00:27:30.648 [2024-11-20 17:21:48.415634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.415666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.415854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.415884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.416084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.416115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.416309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.416342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.416517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.416547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 
00:27:30.648 [2024-11-20 17:21:48.416665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.416696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.416881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.416912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.417052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.417083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.417224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.417257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.417371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.417402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 
00:27:30.648 [2024-11-20 17:21:48.417521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.417552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.417734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.417765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.417882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.417913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.418120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.418151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 00:27:30.648 [2024-11-20 17:21:48.418369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.648 [2024-11-20 17:21:48.418401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:30.648 qpair failed and we were unable to recover it. 
00:27:30.648 [2024-11-20 17:21:48.418635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.648 [2024-11-20 17:21:48.418666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:30.648 qpair failed and we were unable to recover it.
[identical error triplet repeated for each retry from 17:21:48.418813 through 17:21:48.427448 against tqpair=0x7fc46c000b90]
00:27:30.649 [2024-11-20 17:21:48.427703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.650 [2024-11-20 17:21:48.427775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.650 qpair failed and we were unable to recover it.
[identical error triplet repeated for each retry from 17:21:48.427985 through 17:21:48.444788 against tqpair=0x7fc474000b90]
00:27:30.652 [2024-11-20 17:21:48.445084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.445116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.445296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.445329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.445596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.445628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.445820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.445851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.446112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.446144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 
00:27:30.652 [2024-11-20 17:21:48.446276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.446309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.446555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.446592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.446882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.446915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.447233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.447266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.447538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.447570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 
00:27:30.652 [2024-11-20 17:21:48.447791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.447823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.448066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.448097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.448222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.448255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.448434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.448465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.448573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.448604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 
00:27:30.652 [2024-11-20 17:21:48.448780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.448811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.449022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.449053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.449245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.449279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.449469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.449501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.449643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.449675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 
00:27:30.652 [2024-11-20 17:21:48.449867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.449899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.450196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.450236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.450509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.450541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.450751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.450783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.451044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.451076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 
00:27:30.652 [2024-11-20 17:21:48.451283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.451316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.451573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.451605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.652 qpair failed and we were unable to recover it. 00:27:30.652 [2024-11-20 17:21:48.451886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.652 [2024-11-20 17:21:48.451918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.452109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.452141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.452407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.452441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 
00:27:30.653 [2024-11-20 17:21:48.452625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.452657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.452921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.452953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.453241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.453274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.453465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.453497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.453646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.453678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 
00:27:30.653 [2024-11-20 17:21:48.453948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.453979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.454270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.454303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.454553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.454585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.454717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.454749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.454926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.454958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 
00:27:30.653 [2024-11-20 17:21:48.455148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.455179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.455476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.455508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.455790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.455821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.456050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.456081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.456312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.456344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 
00:27:30.653 [2024-11-20 17:21:48.456523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.456554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.456796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.456834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.456969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.457001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.457228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.457261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.457445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.457477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 
00:27:30.653 [2024-11-20 17:21:48.457671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.457701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.457912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.457944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.458155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.458187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.458386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.458419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.458627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.458659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 
00:27:30.653 [2024-11-20 17:21:48.458795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.458827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.459067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.459099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.459342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.459375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.459566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.459598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.459875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.459906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 
00:27:30.653 [2024-11-20 17:21:48.460098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.460131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.460397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.653 [2024-11-20 17:21:48.460431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.653 qpair failed and we were unable to recover it. 00:27:30.653 [2024-11-20 17:21:48.460646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.460677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.460869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.460900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.461166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.461198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 
00:27:30.654 [2024-11-20 17:21:48.461480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.461512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.461818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.461850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.462107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.462139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.462397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.462430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.462673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.462705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 
00:27:30.654 [2024-11-20 17:21:48.462882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.462913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.463105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.463137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.463310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.463343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.463642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.463674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.463933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.463965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 
00:27:30.654 [2024-11-20 17:21:48.464219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.464252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.464496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.464528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.464792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.464823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.465045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.465077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 00:27:30.654 [2024-11-20 17:21:48.465271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.654 [2024-11-20 17:21:48.465304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.654 qpair failed and we were unable to recover it. 
00:27:30.657 [2024-11-20 17:21:48.495529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.657 [2024-11-20 17:21:48.495560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.657 qpair failed and we were unable to recover it. 00:27:30.657 [2024-11-20 17:21:48.495773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.657 [2024-11-20 17:21:48.495804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.657 qpair failed and we were unable to recover it. 00:27:30.657 [2024-11-20 17:21:48.495932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.657 [2024-11-20 17:21:48.495964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.657 qpair failed and we were unable to recover it. 00:27:30.657 [2024-11-20 17:21:48.496280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.657 [2024-11-20 17:21:48.496313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.657 qpair failed and we were unable to recover it. 00:27:30.657 [2024-11-20 17:21:48.496579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.657 [2024-11-20 17:21:48.496611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.657 qpair failed and we were unable to recover it. 
00:27:30.657 [2024-11-20 17:21:48.496855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.657 [2024-11-20 17:21:48.496887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.657 qpair failed and we were unable to recover it. 00:27:30.657 [2024-11-20 17:21:48.497086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.657 [2024-11-20 17:21:48.497117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.657 qpair failed and we were unable to recover it. 00:27:30.657 [2024-11-20 17:21:48.497319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.657 [2024-11-20 17:21:48.497353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.497626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.497658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.497915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.497947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 
00:27:30.658 [2024-11-20 17:21:48.498199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.498241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.498447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.498480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.498716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.498748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.498971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.499003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.499251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.499285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 
00:27:30.658 [2024-11-20 17:21:48.499487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.499520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.499737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.499774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.500043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.500075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.500281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.500314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.500503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.500534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 
00:27:30.658 [2024-11-20 17:21:48.500804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.500835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.501079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.501110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.501354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.501388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.501574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.501605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.501793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.501825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 
00:27:30.658 [2024-11-20 17:21:48.502094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.502126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.502324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.502357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.502609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.502640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.502840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.502871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.503141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.503172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 
00:27:30.658 [2024-11-20 17:21:48.503465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.503498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.503793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.503825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.504090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.504122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.504394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.504428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.504726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.504758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 
00:27:30.658 [2024-11-20 17:21:48.504912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.504944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.505222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.505254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.505525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.505557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.505843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.505874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.506148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.506180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 
00:27:30.658 [2024-11-20 17:21:48.506411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.506443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.506753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.506785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.507051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.507083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.507364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.507398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 00:27:30.658 [2024-11-20 17:21:48.507679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.507710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.658 qpair failed and we were unable to recover it. 
00:27:30.658 [2024-11-20 17:21:48.507893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.658 [2024-11-20 17:21:48.507925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.508168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.508200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.508457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.508488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.508732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.508764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.509032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.509065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 
00:27:30.659 [2024-11-20 17:21:48.509311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.509344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.509587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.509619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.509795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.509827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.510002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.510034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.510293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.510326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 
00:27:30.659 [2024-11-20 17:21:48.510610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.510642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.510915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.510953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.511171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.511211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.511483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.511516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.511797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.511828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 
00:27:30.659 [2024-11-20 17:21:48.512090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.512121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.512259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.512293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.512482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.512513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.512704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.512735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.512924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.512956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 
00:27:30.659 [2024-11-20 17:21:48.513239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.513272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.513545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.513577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.513863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.513894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.514172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.514213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.514490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.514522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 
00:27:30.659 [2024-11-20 17:21:48.514800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.514832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.515055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.515087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.515348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.515381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.515632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.515664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 00:27:30.659 [2024-11-20 17:21:48.515929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.659 [2024-11-20 17:21:48.515961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.659 qpair failed and we were unable to recover it. 
00:27:30.659 [2024-11-20 17:21:48.516176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.516214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.659 qpair failed and we were unable to recover it.
00:27:30.659 [2024-11-20 17:21:48.516492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.516524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.659 qpair failed and we were unable to recover it.
00:27:30.659 [2024-11-20 17:21:48.516730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.516762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.659 qpair failed and we were unable to recover it.
00:27:30.659 [2024-11-20 17:21:48.516984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.517016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.659 qpair failed and we were unable to recover it.
00:27:30.659 [2024-11-20 17:21:48.517191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.517257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.659 qpair failed and we were unable to recover it.
00:27:30.659 [2024-11-20 17:21:48.517557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.517589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.659 qpair failed and we were unable to recover it.
00:27:30.659 [2024-11-20 17:21:48.517782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.517814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.659 qpair failed and we were unable to recover it.
00:27:30.659 [2024-11-20 17:21:48.518084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.518116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.659 qpair failed and we were unable to recover it.
00:27:30.659 [2024-11-20 17:21:48.518400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.659 [2024-11-20 17:21:48.518435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.518646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.518677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.518919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.518951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.519222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.519255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.519503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.519536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.519813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.519845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.520090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.520123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.520390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.520424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.520671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.520702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.520889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.520921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.521177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.521233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.521567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.521599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.521875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.521907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.522102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.522139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.522441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.522474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.522683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.522715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.522988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.523020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.523296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.523330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.523558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.523589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.523766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.523798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.523987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.524018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.524269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.524302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.524527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.524559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.524798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.524829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.525009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.525041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.525290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.525322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.525615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.525648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.525919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.525951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.526130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.526161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.526359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.526392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.526584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.526616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.526869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.526901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.527146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.527179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.527381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.527413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.527681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.527712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.527983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.528014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.528260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.528295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.528541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.528572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.660 [2024-11-20 17:21:48.528818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.660 [2024-11-20 17:21:48.528851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.660 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.529075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.529107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.529380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.529414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.529701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.529734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.529910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.529942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.530223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.530256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.530397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.530429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.530744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.530777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.531030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.531062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.531370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.531403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.531594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.531626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.531818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.531849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.532095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.532126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.532258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.532292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.532555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.532587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.532780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.532817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.533091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.533123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.533400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.533434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.533691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.533722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.533919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.533950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.534250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.534283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.534530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.534562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.534870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.534902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.535152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.535183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.535448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.535480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.535776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.535809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.536104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.536136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.536401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.536435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.536630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.536662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.536866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.536898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.537174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.537225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.537492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.537524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.537731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.537763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.538040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.538073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.538321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.538355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.538550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.538581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.538758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.538789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.539037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.539069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.661 [2024-11-20 17:21:48.539265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.661 [2024-11-20 17:21:48.539299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.661 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.539575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.539607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.539887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.539918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.540212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.540245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.540515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.540546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.540825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.540858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.541154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.541186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.541457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.541489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.541761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.541793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.541990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.542022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.542211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.542244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.542513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.542545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.542767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.542799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.543069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.543100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.543397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.543432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.543654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.543686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.543813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.543845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.544021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.544059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.544309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.544343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.544592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.544624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.544892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.544924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.545241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.545275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.545472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.545503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.545701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.545733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.545993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.546025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.546318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.546351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.546622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.546672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.546943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.546975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.547223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.547256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.547483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.547515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.547769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.662 [2024-11-20 17:21:48.547801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.662 qpair failed and we were unable to recover it.
00:27:30.662 [2024-11-20 17:21:48.548087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.662 [2024-11-20 17:21:48.548119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.662 qpair failed and we were unable to recover it. 00:27:30.662 [2024-11-20 17:21:48.548400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.662 [2024-11-20 17:21:48.548434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.662 qpair failed and we were unable to recover it. 00:27:30.662 [2024-11-20 17:21:48.548718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.662 [2024-11-20 17:21:48.548750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.662 qpair failed and we were unable to recover it. 00:27:30.662 [2024-11-20 17:21:48.549032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.662 [2024-11-20 17:21:48.549064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.662 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.549283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.549317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 
00:27:30.663 [2024-11-20 17:21:48.549455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.549486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.549754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.549787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.549923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.549955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.550157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.550188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.550422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.550455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 
00:27:30.663 [2024-11-20 17:21:48.550736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.550768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.551036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.551068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.551318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.551352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.551631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.551663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.551874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.551906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 
00:27:30.663 [2024-11-20 17:21:48.552158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.552190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.552498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.552531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.552727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.552758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.552944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.552975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.553253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.553286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 
00:27:30.663 [2024-11-20 17:21:48.553508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.553540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.553743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.553775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.554026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.554059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.554311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.554344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.554542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.554575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 
00:27:30.663 [2024-11-20 17:21:48.554856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.554888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.555173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.555220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.555475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.555507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.555765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.555797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.556053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.556084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 
00:27:30.663 [2024-11-20 17:21:48.556390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.556422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.556686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.556718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.556856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.556887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.557089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.557121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.557399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.557434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 
00:27:30.663 [2024-11-20 17:21:48.557715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.557748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.558029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.558062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.558349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.558382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.558604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.558636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.558914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.558947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 
00:27:30.663 [2024-11-20 17:21:48.559234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.559268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.663 qpair failed and we were unable to recover it. 00:27:30.663 [2024-11-20 17:21:48.559414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.663 [2024-11-20 17:21:48.559446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.559724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.559757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.560017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.560049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.560303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.560337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 
00:27:30.664 [2024-11-20 17:21:48.560631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.560664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.560868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.560900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.561057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.561088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.561383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.561417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.561634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.561665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 
00:27:30.664 [2024-11-20 17:21:48.561942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.561974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.562260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.562294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.562572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.562605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.562814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.562847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.563044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.563076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 
00:27:30.664 [2024-11-20 17:21:48.563276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.563310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.563563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.563595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.563812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.563845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.564105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.564137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.564424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.564457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 
00:27:30.664 [2024-11-20 17:21:48.564642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.564674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.564959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.564990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.565242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.565276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.565550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.565582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.565866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.565899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 
00:27:30.664 [2024-11-20 17:21:48.566182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.566224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.566500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.566543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.566769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.566810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.567128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.567170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.567407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.567441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 
00:27:30.664 [2024-11-20 17:21:48.567653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.567685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.567868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.567899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.568174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.568214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.568423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.568456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 00:27:30.664 [2024-11-20 17:21:48.568606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.664 [2024-11-20 17:21:48.568639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.664 qpair failed and we were unable to recover it. 
00:27:30.664 [2024-11-20 17:21:48.568918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.664 [2024-11-20 17:21:48.568949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.664 qpair failed and we were unable to recover it.
00:27:30.664-00:27:30.668 [the three-line connect()/qpair-failed sequence above repeats with fresh timestamps, same tqpair=0x7fc474000b90, addr=10.0.0.2, port=4420, through 2024-11-20 17:21:48.600855]
00:27:30.668 [2024-11-20 17:21:48.601131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.601162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.601488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.601522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.601802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.601834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.602021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.602052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.602241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.602276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-11-20 17:21:48.602482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.602514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.602702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.602735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.602931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.602962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.603248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.603282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.603515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.603548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-11-20 17:21:48.603861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.603893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.604200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.604243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.604523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.604556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.604837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.604870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.605155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.605187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-11-20 17:21:48.605472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.605505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.605790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.605822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.606090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.606123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.606280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.606314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.606571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.606603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-11-20 17:21:48.606822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.606855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.607132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.607164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.607403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.607436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.607738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.607776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.608040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.608073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-11-20 17:21:48.608257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.608291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.608542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.608574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.608873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.608906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.609088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.609120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.609405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.609439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-11-20 17:21:48.609719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.609752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.610054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.610087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.610359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.610393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.610672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.610705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 00:27:30.668 [2024-11-20 17:21:48.610982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.668 [2024-11-20 17:21:48.611014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.668 qpair failed and we were unable to recover it. 
00:27:30.668 [2024-11-20 17:21:48.611296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.611330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.611521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.611554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.611759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.611792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.611921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.611953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.612164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.612197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 
00:27:30.669 [2024-11-20 17:21:48.612529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.612563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.612893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.612925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.613146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.613178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.613476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.613511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.613734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.613765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 
00:27:30.669 [2024-11-20 17:21:48.614023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.614055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.614337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.614371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.614656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.614688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.614836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.614869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.615170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.615211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 
00:27:30.669 [2024-11-20 17:21:48.615511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.615543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.615832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.615864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.616135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.616168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.616469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.616503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.616690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.616723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 
00:27:30.669 [2024-11-20 17:21:48.617017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.617050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.617278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.617311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.617520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.617552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.617752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.617784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.618046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.618078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 
00:27:30.669 [2024-11-20 17:21:48.618363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.618397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.618677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.618709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.618995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.619028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.619312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.619351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.619554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.619586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 
00:27:30.669 [2024-11-20 17:21:48.619793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.619825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.620104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.620135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.620435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.620469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.620671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.620703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.620920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.620953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 
00:27:30.669 [2024-11-20 17:21:48.621087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.621119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.621423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.621457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.621697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.669 [2024-11-20 17:21:48.621729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.669 qpair failed and we were unable to recover it. 00:27:30.669 [2024-11-20 17:21:48.621955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.670 [2024-11-20 17:21:48.621987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.670 qpair failed and we were unable to recover it. 00:27:30.670 [2024-11-20 17:21:48.622219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.670 [2024-11-20 17:21:48.622253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.670 qpair failed and we were unable to recover it. 
00:27:30.670 [2024-11-20 17:21:48.622541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.670 [2024-11-20 17:21:48.622573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.670 qpair failed and we were unable to recover it. 00:27:30.670 [2024-11-20 17:21:48.622866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.670 [2024-11-20 17:21:48.622899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.670 qpair failed and we were unable to recover it. 00:27:30.670 [2024-11-20 17:21:48.623170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.670 [2024-11-20 17:21:48.623212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.670 qpair failed and we were unable to recover it. 00:27:30.670 [2024-11-20 17:21:48.623497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.670 [2024-11-20 17:21:48.623530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.670 qpair failed and we were unable to recover it. 00:27:30.670 [2024-11-20 17:21:48.623801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.670 [2024-11-20 17:21:48.623835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.670 qpair failed and we were unable to recover it. 
00:27:30.671 [... identical message pair repeats: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — through 2024-11-20 17:21:48.655194 ...]
00:27:30.673 [2024-11-20 17:21:48.655473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.655506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.655709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.655741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.656021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.656052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.656331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.656364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.656573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.656605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 
00:27:30.673 [2024-11-20 17:21:48.656912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.656943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.657141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.657172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.657357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.657390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.657580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.657611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.657868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.657901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 
00:27:30.673 [2024-11-20 17:21:48.658058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.658090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.658294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.658329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.673 [2024-11-20 17:21:48.658520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.673 [2024-11-20 17:21:48.658552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.673 qpair failed and we were unable to recover it. 00:27:30.956 [2024-11-20 17:21:48.658832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.956 [2024-11-20 17:21:48.658865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.956 qpair failed and we were unable to recover it. 00:27:30.956 [2024-11-20 17:21:48.659051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.956 [2024-11-20 17:21:48.659082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.956 qpair failed and we were unable to recover it. 
00:27:30.956 [2024-11-20 17:21:48.659286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.956 [2024-11-20 17:21:48.659320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.956 qpair failed and we were unable to recover it. 00:27:30.956 [2024-11-20 17:21:48.659550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.956 [2024-11-20 17:21:48.659582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.956 qpair failed and we were unable to recover it. 00:27:30.956 [2024-11-20 17:21:48.659801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.659833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.660100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.660132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.660427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.660460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 
00:27:30.957 [2024-11-20 17:21:48.660776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.660809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.661087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.661118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.661404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.661439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.661719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.661752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.662040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.662071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 
00:27:30.957 [2024-11-20 17:21:48.662354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.662388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.662674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.662706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.662988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.663020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.663308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.663341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.663553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.663585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 
00:27:30.957 [2024-11-20 17:21:48.663873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.663905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.664183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.664224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.664505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.664538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.664817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.664855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.665136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.665169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 
00:27:30.957 [2024-11-20 17:21:48.665454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.665489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.665768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.665799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.666088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.666120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.666371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.666406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.666599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.666632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 
00:27:30.957 [2024-11-20 17:21:48.666761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.666794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.667070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.667102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.667385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.667419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.667703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.667734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.668013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.668046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 
00:27:30.957 [2024-11-20 17:21:48.668274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.668307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.668594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.668626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.668836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.668868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.669127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.669159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.669490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.669524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 
00:27:30.957 [2024-11-20 17:21:48.669808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.669840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.670071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.670103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.670387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.670421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.670703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.957 [2024-11-20 17:21:48.670735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.957 qpair failed and we were unable to recover it. 00:27:30.957 [2024-11-20 17:21:48.670933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.670965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 
00:27:30.958 [2024-11-20 17:21:48.671220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.671254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.671394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.671426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.671706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.671738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.671922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.671954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.672232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.672265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 
00:27:30.958 [2024-11-20 17:21:48.672504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.672537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.672814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.672847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.673071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.673103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.673406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.673439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.673706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.673739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 
00:27:30.958 [2024-11-20 17:21:48.673946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.673978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.674236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.674270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.674564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.674597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.674870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.674902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.675160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.675192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 
00:27:30.958 [2024-11-20 17:21:48.675415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.675446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.675706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.675739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.676017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.676050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.676257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.676297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 00:27:30.958 [2024-11-20 17:21:48.676585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.676617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 
00:27:30.958 [2024-11-20 17:21:48.676884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.958 [2024-11-20 17:21:48.676916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.958 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats approximately 115 more times, with log timestamps 17:21:48.677 through 17:21:48.708 and console timestamps 00:27:30.958 through 00:27:30.961 ...]
00:27:30.961 [2024-11-20 17:21:48.709048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 17:21:48.709081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.961 qpair failed and we were unable to recover it. 00:27:30.961 [2024-11-20 17:21:48.709335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 17:21:48.709369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.961 qpair failed and we were unable to recover it. 00:27:30.961 [2024-11-20 17:21:48.709673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 17:21:48.709704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.961 qpair failed and we were unable to recover it. 00:27:30.961 [2024-11-20 17:21:48.710001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 17:21:48.710034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.961 qpair failed and we were unable to recover it. 00:27:30.961 [2024-11-20 17:21:48.710335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 17:21:48.710369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.961 qpair failed and we were unable to recover it. 
00:27:30.961 [2024-11-20 17:21:48.710637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 17:21:48.710669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.961 qpair failed and we were unable to recover it. 00:27:30.961 [2024-11-20 17:21:48.710884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.710916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.711191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.711235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.711497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.711529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.711806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.711838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 
00:27:30.962 [2024-11-20 17:21:48.712036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.712069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.712335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.712368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.712648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.712680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.712964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.712996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.713280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.713313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 
00:27:30.962 [2024-11-20 17:21:48.713593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.713626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.713917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.713949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.714223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.714256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.714569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.714601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.714823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.714855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 
00:27:30.962 [2024-11-20 17:21:48.715068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.715100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.715378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.715411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.715698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.715730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.715866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.715899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.716129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.716161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 
00:27:30.962 [2024-11-20 17:21:48.716407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.716441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.716745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.716777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.717040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.717072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.717338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.717373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.717570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.717602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 
00:27:30.962 [2024-11-20 17:21:48.717806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.717843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.718122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.718155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.718390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.718424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.718681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.718713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.718923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.718955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 
00:27:30.962 [2024-11-20 17:21:48.719147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.719178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.719444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.719476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.719673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.719705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.719984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.720015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.720258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.720292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 
00:27:30.962 [2024-11-20 17:21:48.720562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.720594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.720877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.720909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.721197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.721253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.962 [2024-11-20 17:21:48.721442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 17:21:48.721475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.962 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.721758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.721791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 
00:27:30.963 [2024-11-20 17:21:48.721993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.722025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.722333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.722367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.722501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.722534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.722738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.722771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.723048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.723080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 
00:27:30.963 [2024-11-20 17:21:48.723284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.723317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.723620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.723652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.723916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.723949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.724175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.724223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.724422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.724454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 
00:27:30.963 [2024-11-20 17:21:48.724734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.724765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.725067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.725099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.725369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.725403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.725673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.725705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.725981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.726013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 
00:27:30.963 [2024-11-20 17:21:48.726162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.726194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.726479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.726513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.726712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.726744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.727047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.727078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.727346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.727380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 
00:27:30.963 [2024-11-20 17:21:48.727528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.727560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.727816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.727847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.728124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.728155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.728463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.728496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.728759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.728791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 
00:27:30.963 [2024-11-20 17:21:48.729090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.729127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.729410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.729444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.729725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.729758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.730017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.730049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 00:27:30.963 [2024-11-20 17:21:48.730263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.730297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it. 
00:27:30.963 [2024-11-20 17:21:48.730549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.963 [2024-11-20 17:21:48.730581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.963 qpair failed and we were unable to recover it.
[... the identical triplet — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously for the same tqpair from 17:21:48.730 through 17:21:48.763 (log timestamps 00:27:30.963-00:27:30.966); repeated entries elided ...]
00:27:30.966 [2024-11-20 17:21:48.763695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.966 [2024-11-20 17:21:48.763733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.966 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.764029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.764062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.764268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.764302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.764613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.764645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.764869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.764901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 
00:27:30.967 [2024-11-20 17:21:48.765163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.765195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.765504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.765537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.765814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.765848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.766104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.766136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.766368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.766412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 
00:27:30.967 [2024-11-20 17:21:48.766633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.766676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.767000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.767041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.767294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.767330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.767615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.767647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.767929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.767962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 
00:27:30.967 [2024-11-20 17:21:48.768245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.768279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.768485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.768518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.768748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.768780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.768986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.769019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.769324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.769358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 
00:27:30.967 [2024-11-20 17:21:48.769635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.769667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.769925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.769957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.770167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.770210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.770467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.770499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.770707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.770739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 
00:27:30.967 [2024-11-20 17:21:48.771011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.771045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.771278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.771312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.771574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.771607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.771812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.771845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.772072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.772104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 
00:27:30.967 [2024-11-20 17:21:48.772406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.772440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.772671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.772704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.772986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.773018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.773307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.773341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.773642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.773675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 
00:27:30.967 [2024-11-20 17:21:48.773907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.773939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.774223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.774258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.774462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.774494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.967 [2024-11-20 17:21:48.774699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.967 [2024-11-20 17:21:48.774731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.967 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.774988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.775021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 
00:27:30.968 [2024-11-20 17:21:48.775323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.775363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.775563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.775595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.775877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.775909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.776135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.776169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.776347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.776381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 
00:27:30.968 [2024-11-20 17:21:48.776515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.776547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.776804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.776837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.777124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.777156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.777467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.777502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.777760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.777793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 
00:27:30.968 [2024-11-20 17:21:48.778100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.778133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.778400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.778434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.778717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.778750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.779029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.779062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.779355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.779390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 
00:27:30.968 [2024-11-20 17:21:48.779663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.779695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.779843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.779875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.780118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.780150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.780395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.780430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.780631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.780664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 
00:27:30.968 [2024-11-20 17:21:48.780909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.780942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.781196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.781256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.781540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.781572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.781769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.781801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.782082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.782115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 
00:27:30.968 [2024-11-20 17:21:48.782378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.782413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.782713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.782746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.783011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.783044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.783334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.783368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.783644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.783676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 
00:27:30.968 [2024-11-20 17:21:48.783963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.783996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.784280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.784314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.784597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.784630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.784908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.784941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 00:27:30.968 [2024-11-20 17:21:48.785197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.968 [2024-11-20 17:21:48.785239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.968 qpair failed and we were unable to recover it. 
00:27:30.968 [2024-11-20 17:21:48.785534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.968 [2024-11-20 17:21:48.785567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.968 qpair failed and we were unable to recover it.
00:27:30.972 [... the same three-line error sequence repeats roughly 114 more times between 17:21:48.785772 and 17:21:48.818180, always for the same tqpair=0x7fc474000b90, addr=10.0.0.2, port=4420 ...]
00:27:30.972 [2024-11-20 17:21:48.818463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.818496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.818781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.818814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.819093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.819126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.819423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.819457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.819724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.819757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 
00:27:30.972 [2024-11-20 17:21:48.820081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.820114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.820393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.820427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.820685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.820717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.820986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.821019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.821301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.821335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 
00:27:30.972 [2024-11-20 17:21:48.821636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.821676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.821974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.822007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.822268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.822302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.822531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.822563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.822886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.822920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 
00:27:30.972 [2024-11-20 17:21:48.823148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.823180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.823519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.823553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.823804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.823838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.824106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.824138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.824434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.824471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 
00:27:30.972 [2024-11-20 17:21:48.824740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.824774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.825047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.825083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.825373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.825407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.825687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.825721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.826012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.826044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 
00:27:30.972 [2024-11-20 17:21:48.826348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.826381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.826594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.826627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.826759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.826792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.827001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.827033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.827240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.827274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 
00:27:30.972 [2024-11-20 17:21:48.827531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.827564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.827717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.827749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.972 [2024-11-20 17:21:48.827961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.972 [2024-11-20 17:21:48.827993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.972 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.828223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.828257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.828459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.828492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 
00:27:30.973 [2024-11-20 17:21:48.828703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.828735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.828938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.828970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.829233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.829268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.829518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.829550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.829806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.829839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 
00:27:30.973 [2024-11-20 17:21:48.830042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.830075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.830362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.830396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.830658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.830691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.830995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.831028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.831224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.831258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 
00:27:30.973 [2024-11-20 17:21:48.831389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.831422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.831699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.831731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.831948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.831981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.832166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.832198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.832415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.832448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 
00:27:30.973 [2024-11-20 17:21:48.832727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.832767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.832982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.833015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.833330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.833365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.833652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.833685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.833911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.833944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 
00:27:30.973 [2024-11-20 17:21:48.834147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.834180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.834305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.834337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.834545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.834578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.834787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.834819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.835021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.835054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 
00:27:30.973 [2024-11-20 17:21:48.835262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.835295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.835571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.835604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.835863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.835896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.836146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.836178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.836446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.836480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 
00:27:30.973 [2024-11-20 17:21:48.836777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.836809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.836951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.836984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.837272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.837307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.837515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.837547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.973 [2024-11-20 17:21:48.837837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.837870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 
00:27:30.973 [2024-11-20 17:21:48.838144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.973 [2024-11-20 17:21:48.838177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.973 qpair failed and we were unable to recover it. 00:27:30.974 [2024-11-20 17:21:48.838431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.974 [2024-11-20 17:21:48.838465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.974 qpair failed and we were unable to recover it. 00:27:30.974 [2024-11-20 17:21:48.838709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.974 [2024-11-20 17:21:48.838742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.974 qpair failed and we were unable to recover it. 00:27:30.974 [2024-11-20 17:21:48.838970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.974 [2024-11-20 17:21:48.839003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.974 qpair failed and we were unable to recover it. 00:27:30.974 [2024-11-20 17:21:48.839199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.974 [2024-11-20 17:21:48.839242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.974 qpair failed and we were unable to recover it. 
00:27:30.974 [2024-11-20 17:21:48.839459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.974 [2024-11-20 17:21:48.839492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.974 qpair failed and we were unable to recover it. 
00:27:30.977 [... same message repeated 114 more times through 2024-11-20 17:21:48.870115: connect() failed, errno = 111 (ECONNREFUSED) on tqpair=0x7fc474000b90, addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:27:30.977 [2024-11-20 17:21:48.870310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.870343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.870556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.870591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.870882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.870917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.871176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.871222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.871493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.871527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 
00:27:30.977 [2024-11-20 17:21:48.871688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.871721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.871989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.872021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.872247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.872288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.872495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.872528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.872749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.872781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 
00:27:30.977 [2024-11-20 17:21:48.872972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.873004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.873221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.873255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.873463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.873497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.873709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.873741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.873960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.873993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 
00:27:30.977 [2024-11-20 17:21:48.874192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.874239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.874445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.874478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.874688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.874721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.875047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.875081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.875345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.875382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 
00:27:30.977 [2024-11-20 17:21:48.875602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.875635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.875914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.875947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.876254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.876289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.876541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.876573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.876778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.876811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 
00:27:30.977 [2024-11-20 17:21:48.877013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.877046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.977 qpair failed and we were unable to recover it. 00:27:30.977 [2024-11-20 17:21:48.877314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.977 [2024-11-20 17:21:48.877348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.877572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.877606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.877870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.877902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.878197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.878243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 
00:27:30.978 [2024-11-20 17:21:48.878467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.878499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.878650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.878684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.878898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.878931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.879078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.879111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.879407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.879442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 
00:27:30.978 [2024-11-20 17:21:48.879596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.879628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.879848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.879882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.880137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.880170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.880453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.880488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.880703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.880740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 
00:27:30.978 [2024-11-20 17:21:48.880991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.881025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.881336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.881371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.881570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.881605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.881881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.881916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.882217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.882252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 
00:27:30.978 [2024-11-20 17:21:48.882460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.882493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.882697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.882729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.882913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.882951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.883234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.883268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.883485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.883517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 
00:27:30.978 [2024-11-20 17:21:48.883716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.883748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.883953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.883987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.884196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.884239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.884450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.884483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.884766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.884799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 
00:27:30.978 [2024-11-20 17:21:48.885006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.885038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.885234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.885269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.885483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.885517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.885716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.885748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.886030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.886063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 
00:27:30.978 [2024-11-20 17:21:48.886324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.886360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.886523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.886556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.886836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.886870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.887127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.978 [2024-11-20 17:21:48.887160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.978 qpair failed and we were unable to recover it. 00:27:30.978 [2024-11-20 17:21:48.887397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.887431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 
00:27:30.979 [2024-11-20 17:21:48.887634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.887666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 00:27:30.979 [2024-11-20 17:21:48.887864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.887898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 00:27:30.979 [2024-11-20 17:21:48.888179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.888238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 00:27:30.979 [2024-11-20 17:21:48.888385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.888418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 00:27:30.979 [2024-11-20 17:21:48.888675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.888708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 
00:27:30.979 [2024-11-20 17:21:48.888917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.888950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 00:27:30.979 [2024-11-20 17:21:48.889213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.889249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 00:27:30.979 [2024-11-20 17:21:48.889531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.889564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 00:27:30.979 [2024-11-20 17:21:48.889709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.889748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 00:27:30.979 [2024-11-20 17:21:48.890042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.979 [2024-11-20 17:21:48.890075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.979 qpair failed and we were unable to recover it. 
00:27:30.979 [2024-11-20 17:21:48.890369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.979 [2024-11-20 17:21:48.890403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:30.979 qpair failed and we were unable to recover it.
00:27:30.979 [... the identical three-line error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111 → nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously with advancing timestamps from 17:21:48.890621 through 17:21:48.920335; repeats elided ...]
00:27:30.982 [2024-11-20 17:21:48.920492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.920525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.920782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.920814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.921014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.921048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.921236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.921271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.921429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.921462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 
00:27:30.982 [2024-11-20 17:21:48.921593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.921625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.921775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.921807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.922065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.922098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.922311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.922346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.922532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.922565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 
00:27:30.982 [2024-11-20 17:21:48.922796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.922830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.923107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.923141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.923317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.923352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.923510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.923542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.923745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.923777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 
00:27:30.982 [2024-11-20 17:21:48.924095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.924128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.924333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.924368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.924598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.924638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.924870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.924905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.982 [2024-11-20 17:21:48.925050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.925082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 
00:27:30.982 [2024-11-20 17:21:48.925291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.982 [2024-11-20 17:21:48.925327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.982 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.925533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.925567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.925778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.925812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.926092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.926126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.926337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.926372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 
00:27:30.983 [2024-11-20 17:21:48.926584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.926617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.926822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.926855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.927111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.927145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.927286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.927320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.927533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.927567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 
00:27:30.983 [2024-11-20 17:21:48.927797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.927832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.928115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.928148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.928327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.928362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.928511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.928544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.928737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.928771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 
00:27:30.983 [2024-11-20 17:21:48.928987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.929020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.929251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.929285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.929423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.929457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.929597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.929630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.929766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.929798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 
00:27:30.983 [2024-11-20 17:21:48.930055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.930088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.930310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.930345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.930555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.930588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.930723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.930756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.931036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.931070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 
00:27:30.983 [2024-11-20 17:21:48.931385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.931418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.931584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.931616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.931749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.931782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.932008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.932040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.932332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.932367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 
00:27:30.983 [2024-11-20 17:21:48.932523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.932556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.932756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.932789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.932998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.933032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.933242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.933277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.933408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.933442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 
00:27:30.983 [2024-11-20 17:21:48.933580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.933613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.933743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.933776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.934031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.934071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.983 [2024-11-20 17:21:48.934305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.983 [2024-11-20 17:21:48.934340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.983 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.934578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.934611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 
00:27:30.984 [2024-11-20 17:21:48.934819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.934853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.935052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.935085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.935347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.935383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.935613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.935646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.935842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.935876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 
00:27:30.984 [2024-11-20 17:21:48.936083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.936117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.936311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.936346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.936541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.936575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.936715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.936749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.936881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.936915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 
00:27:30.984 [2024-11-20 17:21:48.937118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.937153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.937406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.937442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.937631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.937663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.937849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.937882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 00:27:30.984 [2024-11-20 17:21:48.938084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.938118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it. 
00:27:30.984 [2024-11-20 17:21:48.938263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.984 [2024-11-20 17:21:48.938298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.984 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure messages repeated through 17:21:48.962 elided]
00:27:30.987 [2024-11-20 17:21:48.962647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.962679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.962964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.962996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.963187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.963230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.963441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.963475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.963611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.963643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 
00:27:30.987 [2024-11-20 17:21:48.963778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.963810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.964004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.964037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.964165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.964216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.964406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.964438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.964574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.964606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 
00:27:30.987 [2024-11-20 17:21:48.964804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.964836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.965020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.965053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.965191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.965234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.965347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.965380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.987 [2024-11-20 17:21:48.965582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.965615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 
00:27:30.987 [2024-11-20 17:21:48.965820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.987 [2024-11-20 17:21:48.965853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.987 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.966049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.966082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.966200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.966251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.966477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.966519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.966679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.966720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 
00:27:30.988 [2024-11-20 17:21:48.966944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.966986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.967117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.967159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.967437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.967472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.967600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.967633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.967754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.967786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 
00:27:30.988 [2024-11-20 17:21:48.967971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.968004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.968255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.968290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.968453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.968491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.968624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.968656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.968796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.968829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 
00:27:30.988 [2024-11-20 17:21:48.969010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.969042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.969178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.969222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.969417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.969448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.969585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.969617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.969822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.969853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 
00:27:30.988 [2024-11-20 17:21:48.969979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.970011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.970133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.970165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.970289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.970321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.970430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.970462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.970619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.970651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 
00:27:30.988 [2024-11-20 17:21:48.970898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.970931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.971067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.971100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.971229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.971263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.971390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.971422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.971541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.971574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 
00:27:30.988 [2024-11-20 17:21:48.971755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.971786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.971898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.971929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.972072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.972104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.972298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.972333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:30.988 [2024-11-20 17:21:48.972462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.972494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 
00:27:30.988 [2024-11-20 17:21:48.972617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.988 [2024-11-20 17:21:48.972649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:30.988 qpair failed and we were unable to recover it. 00:27:31.264 [2024-11-20 17:21:48.972872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.264 [2024-11-20 17:21:48.972905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.264 qpair failed and we were unable to recover it. 00:27:31.264 [2024-11-20 17:21:48.973031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.973062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.973176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.973239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.973391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.973423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 
00:27:31.265 [2024-11-20 17:21:48.973609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.973642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.973833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.973865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.973997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.974029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.974232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.974267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.974482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.974514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 
00:27:31.265 [2024-11-20 17:21:48.974706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.974738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.974879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.974912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.975040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.975072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.975322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.975356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.975477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.975509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 
00:27:31.265 [2024-11-20 17:21:48.975651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.975683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.975892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.975924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.976127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.976166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.976367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.976401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.976609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.976642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 
00:27:31.265 [2024-11-20 17:21:48.976909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.976941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.977138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.977171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.977429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.977465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.977670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.265 [2024-11-20 17:21:48.977703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.265 qpair failed and we were unable to recover it. 00:27:31.265 [2024-11-20 17:21:48.977968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.266 [2024-11-20 17:21:48.978001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.266 qpair failed and we were unable to recover it. 
00:27:31.266 [2024-11-20 17:21:48.978180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.266 [2024-11-20 17:21:48.978246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.266 qpair failed and we were unable to recover it. 00:27:31.266 [2024-11-20 17:21:48.978400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.266 [2024-11-20 17:21:48.978434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.266 qpair failed and we were unable to recover it. 00:27:31.266 [2024-11-20 17:21:48.978582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.266 [2024-11-20 17:21:48.978614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.266 qpair failed and we were unable to recover it. 00:27:31.266 [2024-11-20 17:21:48.978749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.266 [2024-11-20 17:21:48.978781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.266 qpair failed and we were unable to recover it. 00:27:31.266 [2024-11-20 17:21:48.979011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.266 [2024-11-20 17:21:48.979045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.266 qpair failed and we were unable to recover it. 
00:27:31.273 [2024-11-20 17:21:49.006634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.273 [2024-11-20 17:21:49.006667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.006784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.006817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.007124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.007157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.007314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.007348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.007515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.007548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 
00:27:31.274 [2024-11-20 17:21:49.007704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.007736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.007932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.007965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.008151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.008183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.008449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.008482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.008621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.008654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 
00:27:31.274 [2024-11-20 17:21:49.008872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.008904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.009176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.009220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.009432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.009465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.009761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.009792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 00:27:31.274 [2024-11-20 17:21:49.010032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.274 [2024-11-20 17:21:49.010065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.274 qpair failed and we were unable to recover it. 
00:27:31.275 [2024-11-20 17:21:49.010340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.010375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.010585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.010616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.010765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.010797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.011076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.011108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.011358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.011393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 
00:27:31.275 [2024-11-20 17:21:49.011522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.011554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.011740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.011772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.012054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.012086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.012363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.012397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.012604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.012643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 
00:27:31.275 [2024-11-20 17:21:49.012980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.013013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.013245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.013279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.013538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.013572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.275 [2024-11-20 17:21:49.013902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.275 [2024-11-20 17:21:49.013935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.275 qpair failed and we were unable to recover it. 00:27:31.276 [2024-11-20 17:21:49.014153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.276 [2024-11-20 17:21:49.014185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.276 qpair failed and we were unable to recover it. 
00:27:31.276 [2024-11-20 17:21:49.014488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.276 [2024-11-20 17:21:49.014522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.276 qpair failed and we were unable to recover it. 00:27:31.276 [2024-11-20 17:21:49.014730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.276 [2024-11-20 17:21:49.014763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.276 qpair failed and we were unable to recover it. 00:27:31.276 [2024-11-20 17:21:49.015052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.276 [2024-11-20 17:21:49.015084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.276 qpair failed and we were unable to recover it. 00:27:31.276 [2024-11-20 17:21:49.015296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.015332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.015532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.015564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 
00:27:31.277 [2024-11-20 17:21:49.015843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.015876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.016137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.016170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.016414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.016447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.016650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.016682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.016887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.016919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 
00:27:31.277 [2024-11-20 17:21:49.017129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.017161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.017369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.017403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.017661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.017692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.017835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.017868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 00:27:31.277 [2024-11-20 17:21:49.018091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.277 [2024-11-20 17:21:49.018123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.277 qpair failed and we were unable to recover it. 
00:27:31.278 [2024-11-20 17:21:49.018338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.018372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 00:27:31.278 [2024-11-20 17:21:49.018630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.018662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 00:27:31.278 [2024-11-20 17:21:49.019082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.019114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 00:27:31.278 [2024-11-20 17:21:49.019390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.019425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 00:27:31.278 [2024-11-20 17:21:49.019704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.019736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 
00:27:31.278 [2024-11-20 17:21:49.020022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.020054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 00:27:31.278 [2024-11-20 17:21:49.020266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.020301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 00:27:31.278 [2024-11-20 17:21:49.020512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.020544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 00:27:31.278 [2024-11-20 17:21:49.020711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.020744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 00:27:31.278 [2024-11-20 17:21:49.020994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.021028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.278 qpair failed and we were unable to recover it. 
00:27:31.278 [2024-11-20 17:21:49.021323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.278 [2024-11-20 17:21:49.021357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.021644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.021676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.021966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.021998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.022224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.022258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.022467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.022499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 
00:27:31.279 [2024-11-20 17:21:49.022715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.022748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.023041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.023074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.023330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.023364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.023552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.023584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.023876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.023913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 
00:27:31.279 [2024-11-20 17:21:49.024124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.024157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.024326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.024361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.024616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.024649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.024908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.024941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 00:27:31.279 [2024-11-20 17:21:49.025140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.279 [2024-11-20 17:21:49.025173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.279 qpair failed and we were unable to recover it. 
00:27:31.280 [2024-11-20 17:21:49.025459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.280 [2024-11-20 17:21:49.025493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.280 qpair failed and we were unable to recover it. 00:27:31.280 [2024-11-20 17:21:49.025791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.280 [2024-11-20 17:21:49.025825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.280 qpair failed and we were unable to recover it. 00:27:31.280 [2024-11-20 17:21:49.026011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.280 [2024-11-20 17:21:49.026044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.280 qpair failed and we were unable to recover it. 00:27:31.280 [2024-11-20 17:21:49.026291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.280 [2024-11-20 17:21:49.026325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.280 qpair failed and we were unable to recover it. 00:27:31.280 [2024-11-20 17:21:49.026481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.280 [2024-11-20 17:21:49.026514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.280 qpair failed and we were unable to recover it. 
00:27:31.280 [2024-11-20 17:21:49.026672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.280 [2024-11-20 17:21:49.026704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.280 qpair failed and we were unable to recover it.
00:27:31.280 [2024-11-20 17:21:49.026972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.280 [2024-11-20 17:21:49.027003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.280 qpair failed and we were unable to recover it.
00:27:31.280 [2024-11-20 17:21:49.027198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.281 [2024-11-20 17:21:49.027248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.281 qpair failed and we were unable to recover it.
00:27:31.281 [2024-11-20 17:21:49.027474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.281 [2024-11-20 17:21:49.027507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.281 qpair failed and we were unable to recover it.
00:27:31.281 [2024-11-20 17:21:49.027643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.281 [2024-11-20 17:21:49.027676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.281 qpair failed and we were unable to recover it.
00:27:31.281 [2024-11-20 17:21:49.027842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.281 [2024-11-20 17:21:49.027874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.281 qpair failed and we were unable to recover it.
00:27:31.281 [2024-11-20 17:21:49.028127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.281 [2024-11-20 17:21:49.028159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.281 qpair failed and we were unable to recover it.
00:27:31.281 [2024-11-20 17:21:49.028373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.281 [2024-11-20 17:21:49.028407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.281 qpair failed and we were unable to recover it.
00:27:31.281 [2024-11-20 17:21:49.028569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.028601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.028901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.028934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.029226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.029260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.029445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.029478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.029700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.029733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.029931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.029963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.030182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.030226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.030432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.030465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.030703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.030736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.030977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.031010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.031307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.031341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.031544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.282 [2024-11-20 17:21:49.031577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.282 qpair failed and we were unable to recover it.
00:27:31.282 [2024-11-20 17:21:49.031718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.283 [2024-11-20 17:21:49.031750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.283 qpair failed and we were unable to recover it.
00:27:31.283 [2024-11-20 17:21:49.031949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.283 [2024-11-20 17:21:49.031982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.283 qpair failed and we were unable to recover it.
00:27:31.283 [2024-11-20 17:21:49.032237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.283 [2024-11-20 17:21:49.032272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.283 qpair failed and we were unable to recover it.
00:27:31.283 [2024-11-20 17:21:49.032422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.283 [2024-11-20 17:21:49.032455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.283 qpair failed and we were unable to recover it.
00:27:31.283 [2024-11-20 17:21:49.032657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.283 [2024-11-20 17:21:49.032689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.283 qpair failed and we were unable to recover it.
00:27:31.283 [2024-11-20 17:21:49.033014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.283 [2024-11-20 17:21:49.033046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.283 qpair failed and we were unable to recover it.
00:27:31.283 [2024-11-20 17:21:49.033253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.283 [2024-11-20 17:21:49.033287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.283 qpair failed and we were unable to recover it.
00:27:31.283 [2024-11-20 17:21:49.033448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.283 [2024-11-20 17:21:49.033480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.283 qpair failed and we were unable to recover it.
00:27:31.284 [2024-11-20 17:21:49.033638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.284 [2024-11-20 17:21:49.033670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.284 qpair failed and we were unable to recover it.
00:27:31.284 [2024-11-20 17:21:49.033909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.284 [2024-11-20 17:21:49.033948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.284 qpair failed and we were unable to recover it.
00:27:31.284 [2024-11-20 17:21:49.034249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.284 [2024-11-20 17:21:49.034284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.284 qpair failed and we were unable to recover it.
00:27:31.284 [2024-11-20 17:21:49.034433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.284 [2024-11-20 17:21:49.034465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.284 qpair failed and we were unable to recover it.
00:27:31.284 [2024-11-20 17:21:49.034681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.284 [2024-11-20 17:21:49.034713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.284 qpair failed and we were unable to recover it.
00:27:31.284 [2024-11-20 17:21:49.034935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.284 [2024-11-20 17:21:49.034967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.284 qpair failed and we were unable to recover it.
00:27:31.284 [2024-11-20 17:21:49.035269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.284 [2024-11-20 17:21:49.035304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.284 qpair failed and we were unable to recover it.
00:27:31.284 [2024-11-20 17:21:49.035511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.284 [2024-11-20 17:21:49.035543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.035686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.035717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.036016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.036047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.036173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.036217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.036363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.036395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.036626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.036658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.036937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.036970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.037152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.037184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.037461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.037494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.037703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.285 [2024-11-20 17:21:49.037736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.285 qpair failed and we were unable to recover it.
00:27:31.285 [2024-11-20 17:21:49.037959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.286 [2024-11-20 17:21:49.037991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.286 qpair failed and we were unable to recover it.
00:27:31.286 [2024-11-20 17:21:49.038296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.286 [2024-11-20 17:21:49.038330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.286 qpair failed and we were unable to recover it.
00:27:31.286 [2024-11-20 17:21:49.038538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.286 [2024-11-20 17:21:49.038570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.286 qpair failed and we were unable to recover it.
00:27:31.286 [2024-11-20 17:21:49.038900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.286 [2024-11-20 17:21:49.038933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.286 qpair failed and we were unable to recover it.
00:27:31.286 [2024-11-20 17:21:49.039139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.286 [2024-11-20 17:21:49.039171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.287 qpair failed and we were unable to recover it.
00:27:31.287 [2024-11-20 17:21:49.039351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.287 [2024-11-20 17:21:49.039384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.287 qpair failed and we were unable to recover it.
00:27:31.287 [2024-11-20 17:21:49.039532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.287 [2024-11-20 17:21:49.039564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.287 qpair failed and we were unable to recover it.
00:27:31.287 [2024-11-20 17:21:49.039848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.287 [2024-11-20 17:21:49.039880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.287 qpair failed and we were unable to recover it.
00:27:31.287 [2024-11-20 17:21:49.040154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.287 [2024-11-20 17:21:49.040186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.287 qpair failed and we were unable to recover it.
00:27:31.287 [2024-11-20 17:21:49.040406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.287 [2024-11-20 17:21:49.040439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.287 qpair failed and we were unable to recover it.
00:27:31.287 [2024-11-20 17:21:49.040646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.287 [2024-11-20 17:21:49.040678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.287 qpair failed and we were unable to recover it.
00:27:31.287 [2024-11-20 17:21:49.040908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.040941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.288 qpair failed and we were unable to recover it.
00:27:31.288 [2024-11-20 17:21:49.041171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.041215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.288 qpair failed and we were unable to recover it.
00:27:31.288 [2024-11-20 17:21:49.041442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.041475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.288 qpair failed and we were unable to recover it.
00:27:31.288 [2024-11-20 17:21:49.041686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.041718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.288 qpair failed and we were unable to recover it.
00:27:31.288 [2024-11-20 17:21:49.041923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.041955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.288 qpair failed and we were unable to recover it.
00:27:31.288 [2024-11-20 17:21:49.042248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.042282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.288 qpair failed and we were unable to recover it.
00:27:31.288 [2024-11-20 17:21:49.042556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.042588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.288 qpair failed and we were unable to recover it.
00:27:31.288 [2024-11-20 17:21:49.042895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.042928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.288 qpair failed and we were unable to recover it.
00:27:31.288 [2024-11-20 17:21:49.043170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.288 [2024-11-20 17:21:49.043215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.043477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.043509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.043707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.043739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.044030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.044063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.044347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.044382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.044611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.044649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.044961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.044994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.045271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.045305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.045504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.045537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.045746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.289 [2024-11-20 17:21:49.045778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.289 qpair failed and we were unable to recover it.
00:27:31.289 [2024-11-20 17:21:49.046022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.046055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.046335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.046370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.046608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.046641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.046872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.046905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.047127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.047159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.047433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.047467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.047739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.047773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.048001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.048034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.048177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.048216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.048417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.290 [2024-11-20 17:21:49.048449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.290 qpair failed and we were unable to recover it.
00:27:31.290 [2024-11-20 17:21:49.048601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.048634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.048930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.048962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.049168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.049200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.049418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.049451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.049684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.049716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.049973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.050006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.050283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.050317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.050517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.050551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.050745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.050777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.050919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.050952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.051217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.051252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.051536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.051568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.051627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f2af0 (9): Bad file descriptor
00:27:31.291 [2024-11-20 17:21:49.052110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.052187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.052440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.052478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.052701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.052733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.053037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.053068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.053221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.053256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.053466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.053498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.053800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.053832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.054058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.054090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.054371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.054405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.054546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.054578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.054937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.054969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.055184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.055225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.055456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.055489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.055661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.055694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.055919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.055951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.056150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.056182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.056419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.056451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.056607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.056639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.056897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.291 [2024-11-20 17:21:49.056929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.291 qpair failed and we were unable to recover it.
00:27:31.291 [2024-11-20 17:21:49.057082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.291 [2024-11-20 17:21:49.057112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.291 qpair failed and we were unable to recover it. 00:27:31.291 [2024-11-20 17:21:49.057323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.291 [2024-11-20 17:21:49.057356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.291 qpair failed and we were unable to recover it. 00:27:31.291 [2024-11-20 17:21:49.057562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.291 [2024-11-20 17:21:49.057594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.291 qpair failed and we were unable to recover it. 00:27:31.291 [2024-11-20 17:21:49.057780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.291 [2024-11-20 17:21:49.057812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.291 qpair failed and we were unable to recover it. 00:27:31.291 [2024-11-20 17:21:49.058087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.291 [2024-11-20 17:21:49.058118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.291 qpair failed and we were unable to recover it. 
00:27:31.291 [2024-11-20 17:21:49.058316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.058352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.058560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.058591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.058799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.058837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.059143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.059174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.059336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.059368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 
00:27:31.292 [2024-11-20 17:21:49.059626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.059658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.059808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.059840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.060164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.060196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.060400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.060433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.060722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.060754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 
00:27:31.292 [2024-11-20 17:21:49.060987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.061018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.061221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.061254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.061416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.061449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.061708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.061741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.061863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.061894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 
00:27:31.292 [2024-11-20 17:21:49.062093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.062124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.062358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.062392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.062601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.292 [2024-11-20 17:21:49.062633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.292 qpair failed and we were unable to recover it. 00:27:31.292 [2024-11-20 17:21:49.062941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.062972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.063177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.063216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 
00:27:31.293 [2024-11-20 17:21:49.063367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.063399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.063606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.063638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.063924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.063957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.064161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.064193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.064493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.064524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 
00:27:31.293 [2024-11-20 17:21:49.064780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.064813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.065120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.065152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.065439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.065472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.065699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.065730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.065977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.066010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 
00:27:31.293 [2024-11-20 17:21:49.066144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.066175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.066337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.066369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.066497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.066529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.066738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.066769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.067076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.067108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 
00:27:31.293 [2024-11-20 17:21:49.067367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.067401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.067657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.067688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.067936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.067968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.068227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.068261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.068456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.068487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 
00:27:31.293 [2024-11-20 17:21:49.068703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.068735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.069028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.069061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.069262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.069301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.069530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.069561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.069770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.069803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 
00:27:31.293 [2024-11-20 17:21:49.070033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.070064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.070305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.070338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.070497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.070528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.070850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.070881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.293 [2024-11-20 17:21:49.071063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.071096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 
00:27:31.293 [2024-11-20 17:21:49.071317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.293 [2024-11-20 17:21:49.071351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.293 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.071546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.071577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.071704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.071736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.071898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.071931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.072054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.072086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 
00:27:31.294 [2024-11-20 17:21:49.072280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.072313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.072594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.072625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.072894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.072926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.073146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.073176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.073425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.073457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 
00:27:31.294 [2024-11-20 17:21:49.073663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.073694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.073927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.073959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.074140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.074171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.074483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.074521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.074833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.074866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 
00:27:31.294 [2024-11-20 17:21:49.075138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.075171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.075414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.075449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.075668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.075700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.075843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.075874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.076136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.076168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 
00:27:31.294 [2024-11-20 17:21:49.076472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.076505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.076725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.076756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.077090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.077122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.077403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.077436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 00:27:31.294 [2024-11-20 17:21:49.077646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.294 [2024-11-20 17:21:49.077677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.294 qpair failed and we were unable to recover it. 
00:27:31.301 [2024-11-20 17:21:49.107532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.107564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.107767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.107799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.107988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.108019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.108215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.108248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.108511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.108542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 
00:27:31.301 [2024-11-20 17:21:49.108675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.108707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.108987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.109019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.109309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.109342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.109526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.109559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.109752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.109783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 
00:27:31.301 [2024-11-20 17:21:49.109975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.110006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.110241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.110275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.110486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.110518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.110722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.110753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.111079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.111110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 
00:27:31.301 [2024-11-20 17:21:49.111255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.111288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.111491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.111523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.111671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.111709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.111989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.112020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.112299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.112332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 
00:27:31.301 [2024-11-20 17:21:49.112593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.112624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.112835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.112866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.113054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.113086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.113303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.113338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.113520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.113551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 
00:27:31.301 [2024-11-20 17:21:49.113757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.113789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.114121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.114153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.114415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.114448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.114659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.114689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.114889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.114920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 
00:27:31.301 [2024-11-20 17:21:49.115199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.115240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.115554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.115587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.115834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.115866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.116182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.116223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.116418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.116449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 
00:27:31.301 [2024-11-20 17:21:49.116730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.116763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.116989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.301 [2024-11-20 17:21:49.117020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.301 qpair failed and we were unable to recover it. 00:27:31.301 [2024-11-20 17:21:49.117305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.117339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.117599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.117631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.117909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.117940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
00:27:31.302 [2024-11-20 17:21:49.118231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.118264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.118494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.118526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.118673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.118704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.118908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.118939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.119199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.119246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
00:27:31.302 [2024-11-20 17:21:49.119557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.119588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.119748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.119780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.120040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.120071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.120268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.120301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.120579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.120610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
00:27:31.302 [2024-11-20 17:21:49.120874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.120906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.121161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.121193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.121474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.121507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.121765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.121796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.121934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.121966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
00:27:31.302 [2024-11-20 17:21:49.122252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.122286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.122427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.122459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.122762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.122800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.123015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.123046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.123250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.123283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
00:27:31.302 [2024-11-20 17:21:49.123489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.123521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.123805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.123836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.124040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.124071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.124278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.124311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.124538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.124569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
00:27:31.302 [2024-11-20 17:21:49.124781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.124812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.125117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.125149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.125409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.125442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.125700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.125732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.125935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.125967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
00:27:31.302 [2024-11-20 17:21:49.126287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.126321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.126537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.126570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.126773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.126804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.127024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.127055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 00:27:31.302 [2024-11-20 17:21:49.127253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.127287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
00:27:31.302 [2024-11-20 17:21:49.127493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.302 [2024-11-20 17:21:49.127524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.302 qpair failed and we were unable to recover it. 
[... the same three-message failure (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error; "qpair failed and we were unable to recover it.") repeats back-to-back for tqpair=0x7fc468000b90, timestamps 17:21:49.127 through 17:21:49.149 ...]
00:27:31.304 [2024-11-20 17:21:49.149956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.304 [2024-11-20 17:21:49.150036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.304 qpair failed and we were unable to recover it. 
[... the same failure then repeats for tqpair=0x7fc474000b90, timestamps 17:21:49.150 through 17:21:49.157, errno = 111 throughout ...]
00:27:31.305 [2024-11-20 17:21:49.157462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.157493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.157705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.157737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.157968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.158001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.158258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.158292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.158570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.158601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 
00:27:31.305 [2024-11-20 17:21:49.158775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.158806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.159031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.159062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.159210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.159243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.159404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.159436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.159548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.159580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 
00:27:31.305 [2024-11-20 17:21:49.159713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.159745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.160044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.160076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.160302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.160336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.160593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.160623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.160923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.161003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 
00:27:31.305 [2024-11-20 17:21:49.161336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.161375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.161635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.161667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.161939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.161971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.162214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.162249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.162471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.162502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 
00:27:31.305 [2024-11-20 17:21:49.162709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.162742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.163004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.163036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.163303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.163335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.163589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.163620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.163826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.163858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 
00:27:31.305 [2024-11-20 17:21:49.164055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.164088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.164297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.164329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.164608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.164640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.164940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.164973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.165170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.165214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 
00:27:31.305 [2024-11-20 17:21:49.165509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.165541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.165775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.165807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.165956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.165989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.166194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.166239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.305 qpair failed and we were unable to recover it. 00:27:31.305 [2024-11-20 17:21:49.166510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.305 [2024-11-20 17:21:49.166543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.166763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.166795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.166990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.167021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.167278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.167313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.167475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.167506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.167642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.167674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.167816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.167848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.168095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.168135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.168441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.168476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.168601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.168633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.168943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.168975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.169177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.169217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.169432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.169463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.169674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.169709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.170016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.170048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.170333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.170367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.170578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.170610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.170761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.170793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.171012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.171047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.171339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.171374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.171632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.171665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.171888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.171921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.172159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.172193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.172347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.172379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.172587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.172620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.172775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.172807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.173101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.173132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.173327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.173361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.173561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.173596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.173743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.173775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.174067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.174100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.174341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.174375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.174633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.174666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.174878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.174910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.175179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.175229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.175474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.175507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.175637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.175670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.175985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.176018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.176254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.176288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.176418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.176451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 00:27:31.306 [2024-11-20 17:21:49.176603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.306 [2024-11-20 17:21:49.176635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.306 qpair failed and we were unable to recover it. 
00:27:31.306 [2024-11-20 17:21:49.176769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.306 [2024-11-20 17:21:49.176801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.306 qpair failed and we were unable to recover it.
00:27:31.306 [... the three messages above repeat for each connect retry, timestamps 17:21:49.177011 through 17:21:49.202672, interleaved with the shell trace below ...]
00:27:31.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2655062 Killed "${NVMF_APP[@]}" "$@"
00:27:31.307 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:31.307 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:31.307 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:31.307 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:31.307 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.308 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2655805
00:27:31.309 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2655805
00:27:31.309 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:31.310 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2655805 ']'
00:27:31.310 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:31.310 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:31.311 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:31.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:31.311 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:31.311 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.317 [2024-11-20 17:21:49.202982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.203014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.203281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.203315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.203523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.203556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.203715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.203747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.204040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.204072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 
00:27:31.317 [2024-11-20 17:21:49.204305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.204339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.204537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.204570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.204775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.204808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.205070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.205102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.205364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.205399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 
00:27:31.317 [2024-11-20 17:21:49.205544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.205576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.205786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.205818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.206017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.206057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.206250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.206285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.206432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.206464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 
00:27:31.317 [2024-11-20 17:21:49.206721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.206754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.206951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.206983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.207251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.207285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.207544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.207577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.207721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.207754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 
00:27:31.317 [2024-11-20 17:21:49.207979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.208011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.208147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.208180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.208396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.208430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.208656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.208688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 00:27:31.317 [2024-11-20 17:21:49.208831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.317 [2024-11-20 17:21:49.208864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.317 qpair failed and we were unable to recover it. 
00:27:31.317 [2024-11-20 17:21:49.209506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.317 [2024-11-20 17:21:49.209586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.317 qpair failed and we were unable to recover it.
00:27:31.319 [2024-11-20 17:21:49.219671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.319 [2024-11-20 17:21:49.219747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.319 qpair failed and we were unable to recover it.
00:27:31.319 [2024-11-20 17:21:49.227272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.319 [2024-11-20 17:21:49.227305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.319 qpair failed and we were unable to recover it. 00:27:31.319 [2024-11-20 17:21:49.227558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.319 [2024-11-20 17:21:49.227589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.319 qpair failed and we were unable to recover it. 00:27:31.319 [2024-11-20 17:21:49.227789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.319 [2024-11-20 17:21:49.227822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.319 qpair failed and we were unable to recover it. 00:27:31.319 [2024-11-20 17:21:49.228042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.319 [2024-11-20 17:21:49.228073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.319 qpair failed and we were unable to recover it. 00:27:31.319 [2024-11-20 17:21:49.228307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.319 [2024-11-20 17:21:49.228340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 
00:27:31.320 [2024-11-20 17:21:49.228596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.228629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.228825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.228856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.229054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.229085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.229336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.229370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.229524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.229555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 
00:27:31.320 [2024-11-20 17:21:49.229782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.229814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.230116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.230194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.230452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.230491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.230696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.230730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.231020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.231051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 
00:27:31.320 [2024-11-20 17:21:49.231330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.231364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.231573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.231606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.231817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.231848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.232122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.232155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.232370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.232403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 
00:27:31.320 [2024-11-20 17:21:49.232543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.232575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.232887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.232920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.233124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.233156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.233376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.233409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.233601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.233633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 
00:27:31.320 [2024-11-20 17:21:49.233886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.233918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.234200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.234243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.234387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.234419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.234724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.234758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.235038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.235070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 
00:27:31.320 [2024-11-20 17:21:49.235371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.235406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.235661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.235693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.235896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.235928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.236075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.236107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.236417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.236451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 
00:27:31.320 [2024-11-20 17:21:49.236580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.236612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.236796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.236828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.320 [2024-11-20 17:21:49.237035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-11-20 17:21:49.237068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.320 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.237225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.237263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.237549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.237582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 
00:27:31.321 [2024-11-20 17:21:49.237718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.237751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.237953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.237985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.238178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.238218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.238438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.238470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.238679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.238712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 
00:27:31.321 [2024-11-20 17:21:49.238851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.238883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.239079] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:27:31.321 [2024-11-20 17:21:49.239092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.239124] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.321 [2024-11-20 17:21:49.239126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.239408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.239440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.239718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.239749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 
00:27:31.321 [2024-11-20 17:21:49.240038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.240071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.240362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.240396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.240690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.240722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.240979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.241011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.241220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.241253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 
00:27:31.321 [2024-11-20 17:21:49.241532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.241564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.241919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.241951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.242214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.242248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.242522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.242556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.242714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.242747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 
00:27:31.321 [2024-11-20 17:21:49.242956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.242988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.243189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.243238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.243495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.243527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.243682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.243714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.243981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.244013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 
00:27:31.321 [2024-11-20 17:21:49.244175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.244232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.244461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.244493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.244724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.244756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.245084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.245116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.245396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.245428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 
00:27:31.321 [2024-11-20 17:21:49.245632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.245664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.245954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.245984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.246261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.246294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.246441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.246473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 00:27:31.321 [2024-11-20 17:21:49.246747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-11-20 17:21:49.246779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.321 qpair failed and we were unable to recover it. 
00:27:31.321 [2024-11-20 17:21:49.247023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.321 [2024-11-20 17:21:49.247055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.322 qpair failed and we were unable to recover it.
00:27:31.322 [... identical connect() failed (errno = 111) / qpair recovery failure repeated for tqpair=0x7fc46c000b90 through 17:21:49.254652 ...]
00:27:31.322 [2024-11-20 17:21:49.254945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.322 [2024-11-20 17:21:49.254981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.322 qpair failed and we were unable to recover it.
00:27:31.323 [... identical connect() failed (errno = 111) / qpair recovery failure repeated for tqpair=0x7fc468000b90 through 17:21:49.264376 ...]
00:27:31.323 [2024-11-20 17:21:49.264596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.323 [2024-11-20 17:21:49.264658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.323 qpair failed and we were unable to recover it. 00:27:31.323 [2024-11-20 17:21:49.264894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.323 [2024-11-20 17:21:49.264928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.323 qpair failed and we were unable to recover it. 00:27:31.323 [2024-11-20 17:21:49.265124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.323 [2024-11-20 17:21:49.265155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.323 qpair failed and we were unable to recover it. 00:27:31.323 [2024-11-20 17:21:49.265443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.265477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.265627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.265659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-11-20 17:21:49.265790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.265822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.266006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.266037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.266226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.266260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.266456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.266487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.266677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.266708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-11-20 17:21:49.266822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.266853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.267070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.267102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.267300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.267333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.267483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.267514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.267680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.267712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-11-20 17:21:49.267925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.267956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.268228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.268262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.268535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.268565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.268789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.268822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.269006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.269037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-11-20 17:21:49.269175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.269215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.269489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.269520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.269742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.269774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.270023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.270055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.270254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.270287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-11-20 17:21:49.270538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.270570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.270782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.270813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.271046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.271079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.271367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.271399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.271618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.271649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-11-20 17:21:49.271851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.271882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.272179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.272218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.272495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.272527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.272670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.272701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 00:27:31.324 [2024-11-20 17:21:49.272950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.324 [2024-11-20 17:21:49.272981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-11-20 17:21:49.273171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.273210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.324 [2024-11-20 17:21:49.273461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.273493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.324 [2024-11-20 17:21:49.273671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.273701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.324 [2024-11-20 17:21:49.273848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.273880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.324 [2024-11-20 17:21:49.274097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.274128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.324 [2024-11-20 17:21:49.274416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.274456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.324 [2024-11-20 17:21:49.274707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.274739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.324 [2024-11-20 17:21:49.275032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.275064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.324 [2024-11-20 17:21:49.275279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.324 [2024-11-20 17:21:49.275313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.324 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.275511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.275542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.275743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.275776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.276033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.276064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.276261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.276294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.276407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.276438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.276633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.276665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.276888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.276920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.277179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.277218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.277359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.277390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.277589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.277620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.277895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.277926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.278132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.278165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.278370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.278402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.278675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.278706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.278834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.278866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.279062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.279093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.279339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.279372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.279667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.279699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.279970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.280002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.280248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.280280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.280478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.280509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.280708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.280739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.280936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.280967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.281187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.281230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.281437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.281469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.281670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.281702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.282042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.282073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.282287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.282321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.282568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.282601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.282817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.282849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.283094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.283125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.283395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.283429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.283717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.283749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.283938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.283970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.284173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.284211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.284412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.284444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.284644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.284681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.284824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.284855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.285049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.285099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.285326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.285357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.285556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.285588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.285784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.285816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.286026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.286057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.286328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.286360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.286548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.286580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.286739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.286771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.287058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.287089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.287276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.287309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.325 [2024-11-20 17:21:49.287451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.325 [2024-11-20 17:21:49.287482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.325 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.287658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.287691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.287937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.287969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.288164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.288195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.288400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.288432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.288578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.288609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.288911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.288943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.289138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.289170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.289353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.289402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.289596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.289631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.289754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.289787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.290055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.290088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.290404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.290440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.290589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.290621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.290844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.290876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.291130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.291188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.291370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.291409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.291611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.291644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.291795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.291827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.292055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.292088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.292224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.292258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.292477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.292510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.292730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.292761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.292967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.292999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.293248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.293282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.293413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.293444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.293634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.293666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.598 [2024-11-20 17:21:49.293858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.598 [2024-11-20 17:21:49.293890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.598 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.294158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.294211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.294411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.294443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.294688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.294720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.294946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.294977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.295236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.295269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.295419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.295451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.295636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.295667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.295867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.295899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.296186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.296225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.296404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.599 [2024-11-20 17:21:49.296436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.599 qpair failed and we were unable to recover it.
00:27:31.599 [2024-11-20 17:21:49.296570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.296602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.296904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.296936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.297131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.297163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.297422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.297455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.297680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.297713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 
00:27:31.599 [2024-11-20 17:21:49.297989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.298020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.298215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.298249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.298434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.298467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.298615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.298646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.298928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.298960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 
00:27:31.599 [2024-11-20 17:21:49.299179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.299222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.299433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.299464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.299605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.299637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.299855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.299886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.300093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.300125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 
00:27:31.599 [2024-11-20 17:21:49.300388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.300422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.300670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.300701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.300923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.300960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.301235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.301269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.301470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.301503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 
00:27:31.599 [2024-11-20 17:21:49.301699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.301730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.302039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.302071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.302349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.599 [2024-11-20 17:21:49.302382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.599 qpair failed and we were unable to recover it. 00:27:31.599 [2024-11-20 17:21:49.302575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.302607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.302896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.302927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 
00:27:31.600 [2024-11-20 17:21:49.303051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.303082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.303279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.303312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.303502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.303534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.303836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.303867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.304061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.304093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 
00:27:31.600 [2024-11-20 17:21:49.304349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.304388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.304527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.304559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.304763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.304794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.305058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.305090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.305334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.305366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 
00:27:31.600 [2024-11-20 17:21:49.305543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.305574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.305716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.305748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.306019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.306050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.306295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.306328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.306593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.306623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 
00:27:31.600 [2024-11-20 17:21:49.306876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.306908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.307197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.307239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.307498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.307530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.307666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.307697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.307971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.308003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 
00:27:31.600 [2024-11-20 17:21:49.308246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.308279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.308543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.308574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.308864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.308896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.309171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.309212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.309482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.309514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 
00:27:31.600 [2024-11-20 17:21:49.309692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.309724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.310031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.310062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.310340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.310373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.310681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.310714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.310967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.310998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 
00:27:31.600 [2024-11-20 17:21:49.311122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.311153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.311407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.311439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.311742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.311781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.311974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.600 [2024-11-20 17:21:49.312006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.600 qpair failed and we were unable to recover it. 00:27:31.600 [2024-11-20 17:21:49.312308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.312341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 
00:27:31.601 [2024-11-20 17:21:49.312594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.312625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.312839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.312871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.313089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.313120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.313346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.313379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.313623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.313655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 
00:27:31.601 [2024-11-20 17:21:49.313901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.313932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.314128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.314159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.314443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.314477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.314715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.314746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.315014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.315047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 
00:27:31.601 [2024-11-20 17:21:49.315342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.315382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.315636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.315667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.315963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.315995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.316277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.316309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.316587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.316618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 
00:27:31.601 [2024-11-20 17:21:49.316900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.316931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.317221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.317254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.317489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.317520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.317734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.317765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.318026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.318058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 
00:27:31.601 [2024-11-20 17:21:49.318310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.318344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.318605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.318635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.318925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.318957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.319124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.601 [2024-11-20 17:21:49.319213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.319252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.319535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.319567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 
00:27:31.601 [2024-11-20 17:21:49.319754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.319786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.319982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.320014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.320271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.320303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.320432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.320464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 00:27:31.601 [2024-11-20 17:21:49.320638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.601 [2024-11-20 17:21:49.320670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.601 qpair failed and we were unable to recover it. 
00:27:31.601-00:27:31.605 [2024-11-20 17:21:49.320933 through 17:21:49.350267] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same *ERROR* pair repeats continuously — "connect() failed, errno = 111" followed by "sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420" — each occurrence ending "qpair failed and we were unable to recover it." Affected tqpair handles over this interval: 0x7fc474000b90, 0x7fc46c000b90, 0x7fc468000b90.
00:27:31.605 [2024-11-20 17:21:49.350553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.350586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.350846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.350876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.351127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.351158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.351453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.351485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.351697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.351728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 
00:27:31.605 [2024-11-20 17:21:49.351911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.351941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.352182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.352220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.352405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.352436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.352631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.352663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.352899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.352930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 
00:27:31.605 [2024-11-20 17:21:49.353196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.353236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.353444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.353474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.353737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.353769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.353900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.353931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.354196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.354252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 
00:27:31.605 [2024-11-20 17:21:49.354541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.354572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.354817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.354849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.355050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.355082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.355346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.355380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.355501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.355532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 
00:27:31.605 [2024-11-20 17:21:49.355745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.355776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.356035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.356067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.356308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.356342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.356554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.356584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.356842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.356874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 
00:27:31.605 [2024-11-20 17:21:49.357113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.357145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.357417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.357449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.357718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.357750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.357992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.358023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.358212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.358245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 
00:27:31.605 [2024-11-20 17:21:49.358528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.605 [2024-11-20 17:21:49.358568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.605 qpair failed and we were unable to recover it. 00:27:31.605 [2024-11-20 17:21:49.358853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.358890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.359166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.359200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.359474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.359505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.359717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.359749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 
00:27:31.606 [2024-11-20 17:21:49.360003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.360037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.360257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.360294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.360512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.360546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.360817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.360851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.361133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.361167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 
00:27:31.606 [2024-11-20 17:21:49.361441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.361473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.361541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:31.606 [2024-11-20 17:21:49.361572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:31.606 [2024-11-20 17:21:49.361579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:31.606 [2024-11-20 17:21:49.361585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:31.606 [2024-11-20 17:21:49.361590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:31.606 [2024-11-20 17:21:49.361663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.361694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.361898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.361931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.362198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.362343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.362631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.362662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.362930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.362961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.363156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.363187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.363250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:31.606 [2024-11-20 17:21:49.363390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.363423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 [2024-11-20 17:21:49.363301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.363399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:31.606 [2024-11-20 17:21:49.363399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:31.606 [2024-11-20 17:21:49.363609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.363640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.363922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.363954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.364218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.364251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.364444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.364476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.364608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.364639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.364920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.364952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.365146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.365176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.365436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.365469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.365709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.365741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.366032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.366064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 
00:27:31.606 [2024-11-20 17:21:49.366332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.366366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.366647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.366679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.366953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.366984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.367274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.367306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 00:27:31.606 [2024-11-20 17:21:49.367513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.606 [2024-11-20 17:21:49.367544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.606 qpair failed and we were unable to recover it. 
00:27:31.606 [2024-11-20 17:21:49.367786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.367817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.368083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.368115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.606 [2024-11-20 17:21:49.368404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.606 [2024-11-20 17:21:49.368438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.606 qpair failed and we were unable to recover it.
00:27:31.607 [2024-11-20 17:21:49.368679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.607 [2024-11-20 17:21:49.368711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.607 qpair failed and we were unable to recover it.
00:27:31.607 [2024-11-20 17:21:49.369033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.607 [2024-11-20 17:21:49.369087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.607 qpair failed and we were unable to recover it.
00:27:31.607 [2024-11-20 17:21:49.369359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.607 [2024-11-20 17:21:49.369409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.607 qpair failed and we were unable to recover it.
00:27:31.607 [2024-11-20 17:21:49.369547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.607 [2024-11-20 17:21:49.369580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.607 qpair failed and we were unable to recover it.
00:27:31.607 [2024-11-20 17:21:49.369846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.607 [2024-11-20 17:21:49.369878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.607 qpair failed and we were unable to recover it.
00:27:31.607 [2024-11-20 17:21:49.370138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.607 [2024-11-20 17:21:49.370170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.607 qpair failed and we were unable to recover it.
00:27:31.607 [2024-11-20 17:21:49.370392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.607 [2024-11-20 17:21:49.370440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.607 qpair failed and we were unable to recover it.
00:27:31.607 [2024-11-20 17:21:49.370636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.370671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.370911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.370943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.371183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.371222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.371462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.371494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.371746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.371778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 
00:27:31.607 [2024-11-20 17:21:49.372014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.372046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.372337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.372370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.372633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.372671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.372961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.372992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.373238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.373269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 
00:27:31.607 [2024-11-20 17:21:49.373531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.373562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.373774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.373804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.374043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.374074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.374363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.374396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.374586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.374617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 
00:27:31.607 [2024-11-20 17:21:49.374840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.374871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.375135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.375167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.375443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.375475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.375740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.375771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.376016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.376047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 
00:27:31.607 [2024-11-20 17:21:49.376288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.376320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.376632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.376664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.376848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.376880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.377098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.607 [2024-11-20 17:21:49.377130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.607 qpair failed and we were unable to recover it. 00:27:31.607 [2024-11-20 17:21:49.377325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.377358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 
00:27:31.608 [2024-11-20 17:21:49.377569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.377600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.377768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.377798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.378003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.378035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.378278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.378310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.378518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.378549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 
00:27:31.608 [2024-11-20 17:21:49.378725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.378755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.378997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.379027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.379266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.379299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.379564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.379595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.379885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.379924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 
00:27:31.608 [2024-11-20 17:21:49.380139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.380171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.380482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.380525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.380790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.380823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.381016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.381049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.381318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.381351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 
00:27:31.608 [2024-11-20 17:21:49.381545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.381578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.381834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.381867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.382114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.382147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.382400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.382433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.382672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.382706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 
00:27:31.608 [2024-11-20 17:21:49.382892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.382925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.383192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.383238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.383516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.383559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.383747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.383780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.384048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.384083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 
00:27:31.608 [2024-11-20 17:21:49.384357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.384391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.384649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.384682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.384935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.384968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.385262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.385296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.385498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.385532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 
00:27:31.608 [2024-11-20 17:21:49.385799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.385833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.386117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.386150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.386426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.386462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.386739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.386773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 00:27:31.608 [2024-11-20 17:21:49.387049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.608 [2024-11-20 17:21:49.387082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.608 qpair failed and we were unable to recover it. 
00:27:31.608 [2024-11-20 17:21:49.387275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.387308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.387508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.387542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.387680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.387714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.387928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.387962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.388214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.388251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 
00:27:31.609 [2024-11-20 17:21:49.388542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.388576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.388817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.388851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.389119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.389153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.389357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.389392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.389628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.389661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 
00:27:31.609 [2024-11-20 17:21:49.389864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.389899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.390094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.390127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.390391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.390427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.390675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.390709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.390890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.390943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 
00:27:31.609 [2024-11-20 17:21:49.391223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.391258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.391453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.391486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.391679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.391711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.391975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.392007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.392286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.392321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 
00:27:31.609 [2024-11-20 17:21:49.392616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.392649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.392919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.392951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.393236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.393269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.393488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.393520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.393789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.393822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 
00:27:31.609 [2024-11-20 17:21:49.394019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.394051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.394308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.394341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.394515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.394547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.394826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.394860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.395118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.395151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 
00:27:31.609 [2024-11-20 17:21:49.395399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.395434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.395616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.395649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.395916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.395951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.396222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.396258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 00:27:31.609 [2024-11-20 17:21:49.396538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.609 [2024-11-20 17:21:49.396575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.609 qpair failed and we were unable to recover it. 
00:27:31.609 [2024-11-20 17:21:49.396793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.609 [2024-11-20 17:21:49.396829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.609 qpair failed and we were unable to recover it.
00:27:31.609 [... identical connect() retry failures (errno = 111) for tqpair=0x14e4ba0 elided ...]
00:27:31.610 [2024-11-20 17:21:49.400350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.610 [2024-11-20 17:21:49.400403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.610 qpair failed and we were unable to recover it.
00:27:31.610 [... identical connect() retry failures (errno = 111) for tqpair=0x7fc468000b90 elided ...]
00:27:31.610 [2024-11-20 17:21:49.405548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.610 [2024-11-20 17:21:49.405590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.610 qpair failed and we were unable to recover it.
00:27:31.610 [... identical connect() retry failures (errno = 111) for tqpair=0x7fc474000b90 elided ...]
00:27:31.611 [2024-11-20 17:21:49.416155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.611 [2024-11-20 17:21:49.416216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.611 qpair failed and we were unable to recover it.
00:27:31.612 [... identical connect() retry failures (errno = 111) for tqpair=0x7fc46c000b90 elided ...]
00:27:31.613 [2024-11-20 17:21:49.427255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.613 [2024-11-20 17:21:49.427299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.613 qpair failed and we were unable to recover it.
00:27:31.613 [2024-11-20 17:21:49.427491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.613 [2024-11-20 17:21:49.427525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.613 qpair failed and we were unable to recover it.
00:27:31.613 [2024-11-20 17:21:49.427654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.427685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.427872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.427903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.428091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.428122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.428410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.428444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.428682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.428713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 
00:27:31.613 [2024-11-20 17:21:49.428902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.428934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.429122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.429153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.429441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.429474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.429761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.429792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.430010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.430042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 
00:27:31.613 [2024-11-20 17:21:49.430232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.430265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.430530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.430562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.430823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.430856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.431045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.431076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.431268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.431302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 
00:27:31.613 [2024-11-20 17:21:49.431418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.431450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.431713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.431745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.431937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.431968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.432235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.432267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.432457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.432489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 
00:27:31.613 [2024-11-20 17:21:49.432674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.432705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.432994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.433025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.433295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.433327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.433516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.433548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.433761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.433792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 
00:27:31.613 [2024-11-20 17:21:49.434056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.434099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.434387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.434420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.434627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.434658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.434900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.434932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.435066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.435097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 
00:27:31.613 [2024-11-20 17:21:49.435362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.435395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.435572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.435604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.435874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.435905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.436143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.436175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 00:27:31.613 [2024-11-20 17:21:49.436443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.613 [2024-11-20 17:21:49.436475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.613 qpair failed and we were unable to recover it. 
00:27:31.613 [2024-11-20 17:21:49.436763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.436794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.437064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.437096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.437340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.437372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.437616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.437647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.437915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.437947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 
00:27:31.614 [2024-11-20 17:21:49.438158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.438190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.438395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.438427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.438688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.438720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.439007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.439039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.439314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.439347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 
00:27:31.614 [2024-11-20 17:21:49.439483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.439514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.439780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.439812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.440097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.440128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.440380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.440413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.440602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.440634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 
00:27:31.614 [2024-11-20 17:21:49.440897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.440928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.441111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.441143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.441418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.441457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.441585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.441616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.441882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.441913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 
00:27:31.614 [2024-11-20 17:21:49.442099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.442132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.442316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.442350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.442617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.442649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.442934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.442966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.443238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.443271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 
00:27:31.614 [2024-11-20 17:21:49.443545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.443577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.443847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.443878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.444159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.444191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.444470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.444503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.444782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.444813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 
00:27:31.614 [2024-11-20 17:21:49.445020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.445051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.445303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.445337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.445521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.445553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.445821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.445853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.446068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.446099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 
00:27:31.614 [2024-11-20 17:21:49.446290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.446324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.446501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.446531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.446724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.614 [2024-11-20 17:21:49.446756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.614 qpair failed and we were unable to recover it. 00:27:31.614 [2024-11-20 17:21:49.446946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.446977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.447222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.447255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 
00:27:31.615 [2024-11-20 17:21:49.447525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.447557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.447824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.447855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.448111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.448143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.448331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.448364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.448535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.448566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 
00:27:31.615 [2024-11-20 17:21:49.448693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.448724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.448964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.448995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.449181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.449223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.449493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.449524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.449792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.449824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 
00:27:31.615 [2024-11-20 17:21:49.450119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.450151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.450396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.450429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.450689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.450720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.450967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.450998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.451238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.451271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 
00:27:31.615 [2024-11-20 17:21:49.451514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.451545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.451730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.451762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.452049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.452080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.452364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.452407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.452681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.452713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 
00:27:31.615 [2024-11-20 17:21:49.452917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.452949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.453080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.453112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.453376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.453409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.453594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.453627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.453916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.453948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 
00:27:31.615 [2024-11-20 17:21:49.454190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.454233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.454446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.454478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.454601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.454633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.454901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.454933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.455216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.455250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 
00:27:31.615 [2024-11-20 17:21:49.455494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.455526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.455717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.455756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.456028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.456059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.456333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.456366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.615 qpair failed and we were unable to recover it. 00:27:31.615 [2024-11-20 17:21:49.456648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.615 [2024-11-20 17:21:49.456680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 
00:27:31.616 [2024-11-20 17:21:49.456947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.456979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 [2024-11-20 17:21:49.457276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.457309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:31.616 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:31.616 [2024-11-20 17:21:49.457529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.457561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- timing_exit start_nvmf_tgt
00:27:31.616 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:31.616 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.616 [2024-11-20 17:21:49.457751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.457782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 [2024-11-20 17:21:49.457962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.457994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 [2024-11-20 17:21:49.458132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.458164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 [2024-11-20 17:21:49.458367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.458400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 [2024-11-20 17:21:49.458647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.458679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 [2024-11-20 17:21:49.458873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.616 [2024-11-20 17:21:49.458906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420
00:27:31.616 qpair failed and we were unable to recover it.
00:27:31.616 [2024-11-20 17:21:49.459116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.459149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.459303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.459337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.459527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.459560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.459744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.459775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.460051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.460084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 
00:27:31.616 [2024-11-20 17:21:49.460275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.460309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.460575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.460608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.460796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.460829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.461090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.461122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.461412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.461445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 
00:27:31.616 [2024-11-20 17:21:49.461568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.461600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.461779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.461815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc474000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.461998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.462038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.462307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.462341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.462496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.462528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 
00:27:31.616 [2024-11-20 17:21:49.462722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.462753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.463035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.463067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.463258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.463291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.463555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.463587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.463826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.463857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 
00:27:31.616 [2024-11-20 17:21:49.464099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.464130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.464402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.464434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.464620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.616 [2024-11-20 17:21:49.464651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.616 qpair failed and we were unable to recover it. 00:27:31.616 [2024-11-20 17:21:49.464854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.464885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.465147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.465177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 
00:27:31.617 [2024-11-20 17:21:49.465437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.465477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.465720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.465752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.465872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.465903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.466192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.466238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.466359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.466390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 
00:27:31.617 [2024-11-20 17:21:49.466520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.466551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.466761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.466793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.467059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.467090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.467276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.467309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.467495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.467528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 
00:27:31.617 [2024-11-20 17:21:49.467714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.467747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.468032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.468065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.468215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.468247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.468456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.468487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.468782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.468814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 
00:27:31.617 [2024-11-20 17:21:49.468988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.469020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.469310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.469343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.469479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.469513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.469660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.469692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.469825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.469861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 
00:27:31.617 [2024-11-20 17:21:49.470061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.470093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.470299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.470332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.470571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.470602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.470785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.470817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 00:27:31.617 [2024-11-20 17:21:49.471008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.617 [2024-11-20 17:21:49.471040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420 00:27:31.617 qpair failed and we were unable to recover it. 
00:27:31.617 [2024-11-20 17:21:49.471226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.617 [2024-11-20 17:21:49.471259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.617 qpair failed and we were unable to recover it.
00:27:31.617 [2024-11-20 17:21:49.471413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.617 [2024-11-20 17:21:49.471444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.617 qpair failed and we were unable to recover it.
00:27:31.617 [2024-11-20 17:21:49.471666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.617 [2024-11-20 17:21:49.471702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.617 qpair failed and we were unable to recover it.
00:27:31.617 [2024-11-20 17:21:49.471848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.617 [2024-11-20 17:21:49.471881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.617 qpair failed and we were unable to recover it.
00:27:31.617 [2024-11-20 17:21:49.472075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.617 [2024-11-20 17:21:49.472107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.617 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.472332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.472368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.472573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.472605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.472720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.472749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.472877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.472909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.473173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.473214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.473391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.473422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.473560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.473591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.473845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.473877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.474055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.474086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.474197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.474240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.474439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.474471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.474592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.474624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.474845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.474876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.475000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.475030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.475317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.475350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.475594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.475626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.475834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.475865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.476096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.476127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.476323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.476358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.476600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.476632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.476833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.476866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.477132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.477164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.477310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.477344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.477493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.477524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.477658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.477696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.477910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.477942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.478157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.478189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.478357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.478390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.478575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.478606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.478740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.478771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.478957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.478988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.479184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.479227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.479369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.479401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.479669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.479705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.479905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.479936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.480125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.480157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.480366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.618 [2024-11-20 17:21:49.480399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.618 qpair failed and we were unable to recover it.
00:27:31.618 [2024-11-20 17:21:49.480546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.480577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.480862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.480894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.481077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.481108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.481339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.481372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.481503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.481534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.481662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.481693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.481835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.481866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.482080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.482114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.482296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.482329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.482478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.482509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.482733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.482765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.482994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.483026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.483243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.483277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.483407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.483439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.483589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.483620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.483747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.483778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.483987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.484018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.484245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.484278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.484409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.484440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.484582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.484613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.484832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.484864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.485126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.485158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.485310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.485343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.485584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.485616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.485757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.485788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.485964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.485994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.486188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.486226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.486422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.486459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.486598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.486630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.486772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.486803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.487050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.487082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.487327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.487360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.487500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.487531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.487667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.487699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.619 qpair failed and we were unable to recover it.
00:27:31.619 [2024-11-20 17:21:49.487925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.619 [2024-11-20 17:21:49.487955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.488177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.488225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.488405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.488436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.488583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.488613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.488821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.488851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.489159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.489190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.489334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.489366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.489481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.489513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.489630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.489662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.489802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.489833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.490089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.490122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.490251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.490285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.490523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.490554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.490695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.490729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.490951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.490982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.491247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.491279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.491470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.491501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.491644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.491675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.491946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.491979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.492169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.492218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.492458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.492505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.492752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.492784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.493090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.493122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.493403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.493437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.493581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.493613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.493808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.493839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.494152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.494187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:31.620 [2024-11-20 17:21:49.494335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.494369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.494565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.494596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:31.620 [2024-11-20 17:21:49.494860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.494894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.620 [2024-11-20 17:21:49.495101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.495135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.620 [2024-11-20 17:21:49.495335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.495376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.495569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.495601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.495869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.495900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.496092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.496124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.620 [2024-11-20 17:21:49.496261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.620 [2024-11-20 17:21:49.496294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.620 qpair failed and we were unable to recover it.
00:27:31.621 [2024-11-20 17:21:49.496578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.621 [2024-11-20 17:21:49.496610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.621 qpair failed and we were unable to recover it.
00:27:31.621 [2024-11-20 17:21:49.496727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.621 [2024-11-20 17:21:49.496758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420
00:27:31.621 qpair failed and we were unable to recover it.
00:27:31.621 [2024-11-20 17:21:49.497005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.497036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.497285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.497319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.497586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.497617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.497832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.497864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.498104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.498135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 
00:27:31.621 [2024-11-20 17:21:49.498355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.498387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.498574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.498605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.498893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.498925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.499186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.499228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.499386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.499417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 
00:27:31.621 [2024-11-20 17:21:49.499604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.499636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.499774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.499806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.500088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.500120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.500260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.500293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.500479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.500511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 
00:27:31.621 [2024-11-20 17:21:49.500776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.500807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.500995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.501027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.501295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.501328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.501537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.501568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.501708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.501739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 
00:27:31.621 [2024-11-20 17:21:49.502026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.502066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.502268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.502305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.502602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.502634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.502846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.502878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.503135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.503167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 
00:27:31.621 [2024-11-20 17:21:49.503469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.503503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.503629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.503661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.503856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.503887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.504127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.504159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.504384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.504417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 
00:27:31.621 [2024-11-20 17:21:49.504598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.504630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.504920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.504951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.505091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.505121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.505318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.505350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 00:27:31.621 [2024-11-20 17:21:49.505572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-20 17:21:49.505602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.621 qpair failed and we were unable to recover it. 
00:27:31.622 [2024-11-20 17:21:49.505747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.505778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.505987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.506019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.506291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.506324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.506500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.506532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.506667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.506699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 
00:27:31.622 [2024-11-20 17:21:49.506968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.506999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.507196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.507237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.507373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.507405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.507605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.507635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.507854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.507886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 
00:27:31.622 [2024-11-20 17:21:49.508152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.508183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.508446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.508478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.508662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.508699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.508965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.508997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.509273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.509305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 
00:27:31.622 [2024-11-20 17:21:49.509492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.509524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.509710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.509743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.509885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.509916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.510087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.510117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.510319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.510351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 
00:27:31.622 [2024-11-20 17:21:49.510618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.510648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.510925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.510957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.511157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.511189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.511467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.511500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.511697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.511729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 
00:27:31.622 [2024-11-20 17:21:49.512031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.512062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.512339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.512372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.512635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.512668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.512881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.512913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 00:27:31.622 [2024-11-20 17:21:49.513157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.622 [2024-11-20 17:21:49.513188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.622 qpair failed and we were unable to recover it. 
00:27:31.623 [2024-11-20 17:21:49.513377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.513409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.513626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.513657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.513787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.513817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.514058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.514089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.514360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.514393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 
00:27:31.623 [2024-11-20 17:21:49.514601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.514632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.514887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.514918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.515175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.515216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.515415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.515446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.515645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.515683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 
00:27:31.623 [2024-11-20 17:21:49.515827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.515859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.516142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.516173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.516478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.516514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.516848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.516880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 00:27:31.623 [2024-11-20 17:21:49.517146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.623 [2024-11-20 17:21:49.517177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc46c000b90 with addr=10.0.0.2, port=4420 00:27:31.623 qpair failed and we were unable to recover it. 
00:27:31.624 Malloc0
00:27:31.624 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.624 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:31.624 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.624 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.625 [2024-11-20 17:21:49.530713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:31.626 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.626 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:31.626 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.626 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.626 [2024-11-20 17:21:49.539558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.539595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.539747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.539779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.540014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.540045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.540240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.540273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.540478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.540509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.540751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.540782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.541046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.541076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.541343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.541375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.541617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.541648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.541777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.541807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.541994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.542025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.542225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.542257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.542405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.542436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.542618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.542649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.542878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.542909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.543191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.543230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.543471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.543502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.543636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.543667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.543860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.543892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.544108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.544139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.544313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.544346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.544533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.544565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.544753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.544784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.544976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.545007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.626 [2024-11-20 17:21:49.545267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.626 [2024-11-20 17:21:49.545301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.626 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.545558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.545589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.545703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.545734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.545998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.546029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.546165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.546197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.546496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.546527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.546715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.546746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.627 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:31.627 [2024-11-20 17:21:49.547026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.547058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.627 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.627 [2024-11-20 17:21:49.547261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.547293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.547485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.547516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.547697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.547728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.547992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.548023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.548140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.548171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.548365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.548398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.548574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.548604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.548846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.548883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.549140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.549171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.549450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.549491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.549745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.549776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.550024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.550056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.550182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.550222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.550360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.550391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.550635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.550666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.550859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.550890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.551014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.551045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.551225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.551258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.551522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.551552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.551791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.551822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.552039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.552070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.552220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.552253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.552482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.552512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.552648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.552679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.552894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.552924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.553097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.627 [2024-11-20 17:21:49.553127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.627 qpair failed and we were unable to recover it.
00:27:31.627 [2024-11-20 17:21:49.553303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.553336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.553539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.553569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.553698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.553728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.553916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.553947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.554155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.554185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.554381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.554412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.554593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.554624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.554743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.554773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.628 [2024-11-20 17:21:49.554969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.555000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:31.628 [2024-11-20 17:21:49.555140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.628 [2024-11-20 17:21:49.555171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc468000b90 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.628 [2024-11-20 17:21:49.555370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.555406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.555670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.555701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.555938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.555969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.556092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.556123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.556333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.556365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.556562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.556594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.556720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.556751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.556992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.557024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.557214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.557246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.557427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.557465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.557727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.557758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.557949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.557980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.558199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.558241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.558448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.558478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.558660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.628 [2024-11-20 17:21:49.558691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e4ba0 with addr=10.0.0.2, port=4420
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.558946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:31.628 [2024-11-20 17:21:49.561401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.628 [2024-11-20 17:21:49.561505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.628 [2024-11-20 17:21:49.561550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.628 [2024-11-20 17:21:49.561573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.628 [2024-11-20 17:21:49.561595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.628 [2024-11-20 17:21:49.561646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.628 17:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2655167
00:27:31.628 [2024-11-20 17:21:49.571288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.628 [2024-11-20 17:21:49.571389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.628 [2024-11-20 17:21:49.571428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.628 [2024-11-20 17:21:49.571449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.628 [2024-11-20 17:21:49.571476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.628 [2024-11-20 17:21:49.571516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.628 [2024-11-20 17:21:49.581307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.628 [2024-11-20 17:21:49.581382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.628 [2024-11-20 17:21:49.581406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.628 [2024-11-20 17:21:49.581419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.628 [2024-11-20 17:21:49.581430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.628 [2024-11-20 17:21:49.581457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.628 qpair failed and we were unable to recover it.
00:27:31.629 [2024-11-20 17:21:49.591315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.629 [2024-11-20 17:21:49.591389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.629 [2024-11-20 17:21:49.591407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.629 [2024-11-20 17:21:49.591416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.629 [2024-11-20 17:21:49.591424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.629 [2024-11-20 17:21:49.591442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.629 qpair failed and we were unable to recover it.
00:27:31.629 [2024-11-20 17:21:49.601284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.629 [2024-11-20 17:21:49.601342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.629 [2024-11-20 17:21:49.601356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.629 [2024-11-20 17:21:49.601362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.629 [2024-11-20 17:21:49.601368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.629 [2024-11-20 17:21:49.601382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.629 qpair failed and we were unable to recover it.
00:27:31.629 [2024-11-20 17:21:49.611299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.629 [2024-11-20 17:21:49.611351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.629 [2024-11-20 17:21:49.611365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.629 [2024-11-20 17:21:49.611371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.629 [2024-11-20 17:21:49.611377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.629 [2024-11-20 17:21:49.611391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.629 qpair failed and we were unable to recover it.
00:27:31.629 [2024-11-20 17:21:49.621340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.629 [2024-11-20 17:21:49.621396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.629 [2024-11-20 17:21:49.621410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.629 [2024-11-20 17:21:49.621416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.629 [2024-11-20 17:21:49.621422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.629 [2024-11-20 17:21:49.621436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.629 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.631373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.631435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.631449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.631456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.631462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.631476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.641404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.641466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.641480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.641487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.641493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.641507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.651413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.651505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.651519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.651525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.651531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.651545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.661422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.661474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.661492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.661498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.661504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.661518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.671483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.671544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.671557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.671565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.671570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.671584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.681486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.681550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.681563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.681570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.681576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.681590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.691508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.691560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.691574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.691581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.691587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.691601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.701525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.701578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.701592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.701599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.701608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.701622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.711566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.711621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.711636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.711642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.711648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.887 [2024-11-20 17:21:49.711663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.887 qpair failed and we were unable to recover it.
00:27:31.887 [2024-11-20 17:21:49.721579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.887 [2024-11-20 17:21:49.721627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.887 [2024-11-20 17:21:49.721641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.887 [2024-11-20 17:21:49.721647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.887 [2024-11-20 17:21:49.721654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.721668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.731640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.731695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.731709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.731716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.731722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.731736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.741646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.741702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.741716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.741722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.741728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.741742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.751686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.751749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.751762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.751769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.751775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.751788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.761690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.761740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.761753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.761760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.761765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.761780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.771713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.771763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.771776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.771782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.771789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.771802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.781738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.781790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.781803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.781810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.781816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.781830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.791794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.791872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.791889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.791895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.791901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.791915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.801870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.801947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.801960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.801967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.801973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.801987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.811809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.811902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.811915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.811922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.811927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.811942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.821855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.888 [2024-11-20 17:21:49.821911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.888 [2024-11-20 17:21:49.821924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.888 [2024-11-20 17:21:49.821931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.888 [2024-11-20 17:21:49.821937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.888 [2024-11-20 17:21:49.821951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.888 qpair failed and we were unable to recover it.
00:27:31.888 [2024-11-20 17:21:49.831926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.832033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.832047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.832054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.832062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.832077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.841921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.841975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.841989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.841995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.842001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.842015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.851885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.851939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.851953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.851960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.851965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.851979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.862002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.862060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.862074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.862081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.862086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.862100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.871992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.872096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.872110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.872117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.872122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.872136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.882019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.882077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.882091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.882097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.882103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.882117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.892053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.892104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.892118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.892124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.892130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.892144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.901999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.902056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.902069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.902076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.902082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.902096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.912113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.912183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.912197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.912207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.912213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.912228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:31.889 [2024-11-20 17:21:49.922139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.889 [2024-11-20 17:21:49.922191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.889 [2024-11-20 17:21:49.922213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.889 [2024-11-20 17:21:49.922220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.889 [2024-11-20 17:21:49.922226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:31.889 [2024-11-20 17:21:49.922241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.889 qpair failed and we were unable to recover it.
00:27:32.148 [2024-11-20 17:21:49.932211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.148 [2024-11-20 17:21:49.932271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.148 [2024-11-20 17:21:49.932285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.148 [2024-11-20 17:21:49.932292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.148 [2024-11-20 17:21:49.932298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.148 [2024-11-20 17:21:49.932312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.148 qpair failed and we were unable to recover it.
00:27:32.148 [2024-11-20 17:21:49.942212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.148 [2024-11-20 17:21:49.942263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.148 [2024-11-20 17:21:49.942276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.148 [2024-11-20 17:21:49.942283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.148 [2024-11-20 17:21:49.942289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.148 [2024-11-20 17:21:49.942303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.148 qpair failed and we were unable to recover it.
00:27:32.148 [2024-11-20 17:21:49.952232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.148 [2024-11-20 17:21:49.952309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.148 [2024-11-20 17:21:49.952323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.148 [2024-11-20 17:21:49.952329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.148 [2024-11-20 17:21:49.952335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.148 [2024-11-20 17:21:49.952350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.148 qpair failed and we were unable to recover it.
00:27:32.148 [2024-11-20 17:21:49.962238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.148 [2024-11-20 17:21:49.962480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.148 [2024-11-20 17:21:49.962496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.148 [2024-11-20 17:21:49.962507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.148 [2024-11-20 17:21:49.962513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.148 [2024-11-20 17:21:49.962529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.148 qpair failed and we were unable to recover it.
00:27:32.148 [2024-11-20 17:21:49.972263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.148 [2024-11-20 17:21:49.972320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.148 [2024-11-20 17:21:49.972333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.148 [2024-11-20 17:21:49.972340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.148 [2024-11-20 17:21:49.972346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.148 [2024-11-20 17:21:49.972360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.148 qpair failed and we were unable to recover it.
00:27:32.148 [2024-11-20 17:21:49.982222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.148 [2024-11-20 17:21:49.982276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:49.982289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:49.982296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:49.982302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:49.982317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:49.992272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:49.992328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:49.992343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:49.992350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:49.992357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:49.992371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.002373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.002458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.002474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.002482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.002488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.002505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.012467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.012556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.012572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.012581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.012587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.012603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.022446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.022509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.022527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.022534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.022541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.022556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.032462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.032529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.032545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.032552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.032558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.032573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.042540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.042638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.042653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.042662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.042669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.042684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.052572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.052634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.052656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.052664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.052670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.052687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.062464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.062524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.062539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.062546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.062552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.062567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.072563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.072620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.072635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.072641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.072648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.072662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.082580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.082634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.082648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.082655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.082661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.082675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.092545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.092597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.092611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.092621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.092628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.092642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.102692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.102758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.102777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.102785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.102791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.102808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.112690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.112748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.112762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.112769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.112775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.112789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.122768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.122876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.122893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.122900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.122906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.122922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.132703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.132760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.132774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.132781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.132788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.132802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.142759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.142819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.142833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.142840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.142846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.142860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.152777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.152841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.152855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.152861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.152867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.152882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.162811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.162869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.162883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.149 [2024-11-20 17:21:50.162889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.149 [2024-11-20 17:21:50.162895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.149 [2024-11-20 17:21:50.162909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.149 qpair failed and we were unable to recover it.
00:27:32.149 [2024-11-20 17:21:50.172871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.149 [2024-11-20 17:21:50.172932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.149 [2024-11-20 17:21:50.172946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.150 [2024-11-20 17:21:50.172953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.150 [2024-11-20 17:21:50.172959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.150 [2024-11-20 17:21:50.172974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.150 qpair failed and we were unable to recover it.
00:27:32.150 [2024-11-20 17:21:50.182865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.150 [2024-11-20 17:21:50.182921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.150 [2024-11-20 17:21:50.182939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.150 [2024-11-20 17:21:50.182946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.150 [2024-11-20 17:21:50.182951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.150 [2024-11-20 17:21:50.182965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.150 qpair failed and we were unable to recover it.
00:27:32.408 [2024-11-20 17:21:50.192921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.192991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.193004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.193011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.193017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.193031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.202937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.202998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.203014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.203021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.203027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.203041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.212964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.213016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.213030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.213036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.213042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.213056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.222929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.222987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.223001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.223010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.223016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.223030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.233033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.233092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.233107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.233114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.233120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.233134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.243051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.243104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.243118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.243125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.243131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.243145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.253033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.253090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.253106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.253113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.253119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.253134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.263131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.263192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.263209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.263216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.263222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.263236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.273140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.273200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.273218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.273225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.273231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.273244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.283166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.283226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.283239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.283246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.283252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.283265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.293180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.293241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.293254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.293261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.293267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.293281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.303220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.303280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.303294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.303301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.303306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.303319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.409 [2024-11-20 17:21:50.313263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.409 [2024-11-20 17:21:50.313321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.409 [2024-11-20 17:21:50.313338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.409 [2024-11-20 17:21:50.313345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.409 [2024-11-20 17:21:50.313351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.409 [2024-11-20 17:21:50.313364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.409 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.323272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.323332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.323345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.323352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.323358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.323370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.333296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.333354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.333368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.333375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.333381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.333395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.343315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.343375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.343388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.343395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.343401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.343414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.353373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.353429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.353442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.353452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.353458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.353471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.363375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.363429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.363442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.363449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.363455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.363468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.373510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.373576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.373590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.373596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.373602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.373616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.383416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.383475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.383488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.383495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.383501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.383514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.393550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.393610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.393623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.393629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.393635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.393649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.403524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.403578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.403591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.403598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.403605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.403618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.413559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.413642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.413656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.413663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.413669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.413682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.423508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.423566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.423579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.423586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.423592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.423606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.433583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.433641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.433654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.410 [2024-11-20 17:21:50.433661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.410 [2024-11-20 17:21:50.433667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.410 [2024-11-20 17:21:50.433681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.410 qpair failed and we were unable to recover it.
00:27:32.410 [2024-11-20 17:21:50.443578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.410 [2024-11-20 17:21:50.443668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.410 [2024-11-20 17:21:50.443683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.411 [2024-11-20 17:21:50.443689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.411 [2024-11-20 17:21:50.443695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.411 [2024-11-20 17:21:50.443709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.411 qpair failed and we were unable to recover it.
00:27:32.670 [2024-11-20 17:21:50.453586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.670 [2024-11-20 17:21:50.453648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.670 [2024-11-20 17:21:50.453662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.670 [2024-11-20 17:21:50.453669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.670 [2024-11-20 17:21:50.453674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.670 [2024-11-20 17:21:50.453688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.670 qpair failed and we were unable to recover it.
00:27:32.670 [2024-11-20 17:21:50.463667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.670 [2024-11-20 17:21:50.463722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.670 [2024-11-20 17:21:50.463736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.670 [2024-11-20 17:21:50.463743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.670 [2024-11-20 17:21:50.463749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.670 [2024-11-20 17:21:50.463762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.670 qpair failed and we were unable to recover it.
00:27:32.670 [2024-11-20 17:21:50.473635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.670 [2024-11-20 17:21:50.473694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.670 [2024-11-20 17:21:50.473707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.670 [2024-11-20 17:21:50.473714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.670 [2024-11-20 17:21:50.473720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.670 [2024-11-20 17:21:50.473734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.670 qpair failed and we were unable to recover it.
00:27:32.670 [2024-11-20 17:21:50.483657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.670 [2024-11-20 17:21:50.483713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.670 [2024-11-20 17:21:50.483726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.670 [2024-11-20 17:21:50.483736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.670 [2024-11-20 17:21:50.483742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.670 [2024-11-20 17:21:50.483756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.670 qpair failed and we were unable to recover it.
00:27:32.670 [2024-11-20 17:21:50.493754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.670 [2024-11-20 17:21:50.493811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.670 [2024-11-20 17:21:50.493824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.670 [2024-11-20 17:21:50.493831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.670 [2024-11-20 17:21:50.493837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.670 [2024-11-20 17:21:50.493850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.670 qpair failed and we were unable to recover it.
00:27:32.670 [2024-11-20 17:21:50.503718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.670 [2024-11-20 17:21:50.503810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.670 [2024-11-20 17:21:50.503823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.670 [2024-11-20 17:21:50.503829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.670 [2024-11-20 17:21:50.503835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:32.670 [2024-11-20 17:21:50.503849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.670 qpair failed and we were unable to recover it.
00:27:32.670 [2024-11-20 17:21:50.513782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.670 [2024-11-20 17:21:50.513839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.670 [2024-11-20 17:21:50.513853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.670 [2024-11-20 17:21:50.513860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.670 [2024-11-20 17:21:50.513866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.670 [2024-11-20 17:21:50.513880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.670 qpair failed and we were unable to recover it. 
00:27:32.670 [2024-11-20 17:21:50.523778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.670 [2024-11-20 17:21:50.523832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.670 [2024-11-20 17:21:50.523846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.670 [2024-11-20 17:21:50.523853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.670 [2024-11-20 17:21:50.523859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.670 [2024-11-20 17:21:50.523873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.670 qpair failed and we were unable to recover it. 
00:27:32.670 [2024-11-20 17:21:50.533857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.670 [2024-11-20 17:21:50.533916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.670 [2024-11-20 17:21:50.533930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.670 [2024-11-20 17:21:50.533938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.670 [2024-11-20 17:21:50.533943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.670 [2024-11-20 17:21:50.533957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.670 qpair failed and we were unable to recover it. 
00:27:32.670 [2024-11-20 17:21:50.543830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.670 [2024-11-20 17:21:50.543886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.543900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.543906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.543912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.543926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.553988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.554048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.554061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.554068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.554074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.554087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.563918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.563976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.563989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.563995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.564001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.564014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.573946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.573998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.574012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.574018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.574024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.574038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.583962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.584012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.584025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.584031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.584038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.584051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.593983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.594047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.594061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.594068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.594073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.594087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.604072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.604152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.604167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.604174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.604180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.604194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.614019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.614076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.614090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.614100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.614106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.614120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.624121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.624215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.624230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.624237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.624243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.671 [2024-11-20 17:21:50.624257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.671 qpair failed and we were unable to recover it. 
00:27:32.671 [2024-11-20 17:21:50.634181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.671 [2024-11-20 17:21:50.634249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.671 [2024-11-20 17:21:50.634263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.671 [2024-11-20 17:21:50.634270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.671 [2024-11-20 17:21:50.634276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.672 [2024-11-20 17:21:50.634290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-11-20 17:21:50.644235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-11-20 17:21:50.644330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-11-20 17:21:50.644343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-11-20 17:21:50.644350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-11-20 17:21:50.644356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.672 [2024-11-20 17:21:50.644370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-11-20 17:21:50.654189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-11-20 17:21:50.654254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-11-20 17:21:50.654269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-11-20 17:21:50.654276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-11-20 17:21:50.654282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.672 [2024-11-20 17:21:50.654296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-11-20 17:21:50.664288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-11-20 17:21:50.664355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-11-20 17:21:50.664369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-11-20 17:21:50.664376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-11-20 17:21:50.664382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.672 [2024-11-20 17:21:50.664396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-11-20 17:21:50.674209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-11-20 17:21:50.674271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-11-20 17:21:50.674285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-11-20 17:21:50.674291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-11-20 17:21:50.674298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.672 [2024-11-20 17:21:50.674311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-11-20 17:21:50.684315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-11-20 17:21:50.684376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-11-20 17:21:50.684389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-11-20 17:21:50.684396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-11-20 17:21:50.684402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.672 [2024-11-20 17:21:50.684415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-11-20 17:21:50.694321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-11-20 17:21:50.694381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-11-20 17:21:50.694394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-11-20 17:21:50.694400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-11-20 17:21:50.694406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.672 [2024-11-20 17:21:50.694420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-11-20 17:21:50.704302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-11-20 17:21:50.704381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-11-20 17:21:50.704394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-11-20 17:21:50.704400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-11-20 17:21:50.704406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.672 [2024-11-20 17:21:50.704420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.931 [2024-11-20 17:21:50.714384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.931 [2024-11-20 17:21:50.714446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.931 [2024-11-20 17:21:50.714460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.931 [2024-11-20 17:21:50.714466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.931 [2024-11-20 17:21:50.714472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.931 [2024-11-20 17:21:50.714486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.931 qpair failed and we were unable to recover it. 
00:27:32.931 [2024-11-20 17:21:50.724409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.931 [2024-11-20 17:21:50.724469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.931 [2024-11-20 17:21:50.724482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.931 [2024-11-20 17:21:50.724489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.931 [2024-11-20 17:21:50.724495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.931 [2024-11-20 17:21:50.724509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.931 qpair failed and we were unable to recover it. 
00:27:32.931 [2024-11-20 17:21:50.734366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.931 [2024-11-20 17:21:50.734425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.931 [2024-11-20 17:21:50.734440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.931 [2024-11-20 17:21:50.734448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.931 [2024-11-20 17:21:50.734454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.931 [2024-11-20 17:21:50.734469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.931 qpair failed and we were unable to recover it. 
00:27:32.931 [2024-11-20 17:21:50.744392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.931 [2024-11-20 17:21:50.744454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.931 [2024-11-20 17:21:50.744468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.931 [2024-11-20 17:21:50.744478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.931 [2024-11-20 17:21:50.744484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.931 [2024-11-20 17:21:50.744498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.931 qpair failed and we were unable to recover it. 
00:27:32.931 [2024-11-20 17:21:50.754550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.931 [2024-11-20 17:21:50.754606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.931 [2024-11-20 17:21:50.754619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.931 [2024-11-20 17:21:50.754626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.931 [2024-11-20 17:21:50.754632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.931 [2024-11-20 17:21:50.754646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.932 qpair failed and we were unable to recover it. 
00:27:32.932 [2024-11-20 17:21:50.764521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.932 [2024-11-20 17:21:50.764578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.932 [2024-11-20 17:21:50.764592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.932 [2024-11-20 17:21:50.764599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.932 [2024-11-20 17:21:50.764605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.932 [2024-11-20 17:21:50.764619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.932 qpair failed and we were unable to recover it. 
00:27:32.932 [2024-11-20 17:21:50.774533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.932 [2024-11-20 17:21:50.774602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.932 [2024-11-20 17:21:50.774615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.932 [2024-11-20 17:21:50.774622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.932 [2024-11-20 17:21:50.774628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.932 [2024-11-20 17:21:50.774641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.932 qpair failed and we were unable to recover it. 
00:27:32.932 [2024-11-20 17:21:50.784577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.932 [2024-11-20 17:21:50.784642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.932 [2024-11-20 17:21:50.784656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.932 [2024-11-20 17:21:50.784663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.932 [2024-11-20 17:21:50.784668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.932 [2024-11-20 17:21:50.784685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.932 qpair failed and we were unable to recover it. 
00:27:32.932 [2024-11-20 17:21:50.794602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.932 [2024-11-20 17:21:50.794658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.932 [2024-11-20 17:21:50.794672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.932 [2024-11-20 17:21:50.794678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.932 [2024-11-20 17:21:50.794685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.932 [2024-11-20 17:21:50.794698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.932 qpair failed and we were unable to recover it. 
00:27:32.932 [2024-11-20 17:21:50.804641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.932 [2024-11-20 17:21:50.804699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.932 [2024-11-20 17:21:50.804713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.932 [2024-11-20 17:21:50.804720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.932 [2024-11-20 17:21:50.804726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:32.932 [2024-11-20 17:21:50.804740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.932 qpair failed and we were unable to recover it. 
[log condensed: the same six-message CONNECT failure sequence repeated 34 more times at ~10 ms intervals, from 2024-11-20 17:21:50.814656 through 17:21:51.145716 (elapsed 00:27:32.932 to 00:27:33.193) — each occurrence reporting Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; sct 1, sc 130; Failed to connect tqpair=0x14e4ba0; CQ transport error -6 (No such device or address) on qpair id 3; and "qpair failed and we were unable to recover it."]
00:27:33.193 [2024-11-20 17:21:51.155667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.193 [2024-11-20 17:21:51.155723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.193 [2024-11-20 17:21:51.155736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.193 [2024-11-20 17:21:51.155743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.193 [2024-11-20 17:21:51.155749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.193 [2024-11-20 17:21:51.155763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.193 qpair failed and we were unable to recover it. 
00:27:33.193 [2024-11-20 17:21:51.165679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.193 [2024-11-20 17:21:51.165733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.193 [2024-11-20 17:21:51.165746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.193 [2024-11-20 17:21:51.165753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.193 [2024-11-20 17:21:51.165759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.193 [2024-11-20 17:21:51.165772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.193 qpair failed and we were unable to recover it. 
00:27:33.193 [2024-11-20 17:21:51.175749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.193 [2024-11-20 17:21:51.175809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.193 [2024-11-20 17:21:51.175822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.193 [2024-11-20 17:21:51.175828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.193 [2024-11-20 17:21:51.175834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.193 [2024-11-20 17:21:51.175851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.193 qpair failed and we were unable to recover it. 
00:27:33.193 [2024-11-20 17:21:51.185734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.193 [2024-11-20 17:21:51.185790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.193 [2024-11-20 17:21:51.185803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.193 [2024-11-20 17:21:51.185810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.193 [2024-11-20 17:21:51.185816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.193 [2024-11-20 17:21:51.185831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.193 qpair failed and we were unable to recover it. 
00:27:33.193 [2024-11-20 17:21:51.195776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.193 [2024-11-20 17:21:51.195850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.193 [2024-11-20 17:21:51.195864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.193 [2024-11-20 17:21:51.195871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.193 [2024-11-20 17:21:51.195877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.193 [2024-11-20 17:21:51.195891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.193 qpair failed and we were unable to recover it. 
00:27:33.193 [2024-11-20 17:21:51.205794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.193 [2024-11-20 17:21:51.205853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.193 [2024-11-20 17:21:51.205867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.193 [2024-11-20 17:21:51.205874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.193 [2024-11-20 17:21:51.205880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.193 [2024-11-20 17:21:51.205893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.193 qpair failed and we were unable to recover it. 
00:27:33.193 [2024-11-20 17:21:51.215847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.193 [2024-11-20 17:21:51.215901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.193 [2024-11-20 17:21:51.215917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.193 [2024-11-20 17:21:51.215925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.193 [2024-11-20 17:21:51.215931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.193 [2024-11-20 17:21:51.215945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.193 qpair failed and we were unable to recover it. 
00:27:33.193 [2024-11-20 17:21:51.225833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.193 [2024-11-20 17:21:51.225886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.193 [2024-11-20 17:21:51.225899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.193 [2024-11-20 17:21:51.225905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.193 [2024-11-20 17:21:51.225911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.193 [2024-11-20 17:21:51.225925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.193 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-11-20 17:21:51.235822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.453 [2024-11-20 17:21:51.235912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.453 [2024-11-20 17:21:51.235926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.453 [2024-11-20 17:21:51.235933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.453 [2024-11-20 17:21:51.235938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.453 [2024-11-20 17:21:51.235953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-11-20 17:21:51.245900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.453 [2024-11-20 17:21:51.245957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.453 [2024-11-20 17:21:51.245971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.453 [2024-11-20 17:21:51.245977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.453 [2024-11-20 17:21:51.245983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.453 [2024-11-20 17:21:51.245997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-11-20 17:21:51.255971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.453 [2024-11-20 17:21:51.256067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.453 [2024-11-20 17:21:51.256081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.453 [2024-11-20 17:21:51.256088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.453 [2024-11-20 17:21:51.256094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.453 [2024-11-20 17:21:51.256108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-11-20 17:21:51.265972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.453 [2024-11-20 17:21:51.266030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.453 [2024-11-20 17:21:51.266043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.453 [2024-11-20 17:21:51.266054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.453 [2024-11-20 17:21:51.266059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.453 [2024-11-20 17:21:51.266073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.453 [2024-11-20 17:21:51.275978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.453 [2024-11-20 17:21:51.276032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.453 [2024-11-20 17:21:51.276046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.453 [2024-11-20 17:21:51.276053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.453 [2024-11-20 17:21:51.276059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.453 [2024-11-20 17:21:51.276072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.453 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.286009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.286065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.286079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.286085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.286091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.286105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.296099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.296196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.296214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.296221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.296226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.296240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.306034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.306083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.306096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.306103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.306109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.306126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.316121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.316175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.316189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.316196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.316206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.316221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.326124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.326180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.326192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.326199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.326208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.326222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.336150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.336205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.336219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.336226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.336232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.336246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.346174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.346250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.346263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.346270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.346276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.346289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.356217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.356285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.356298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.356305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.356310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.356324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.366252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.366307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.366321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.366328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.366334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.366348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.376269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.376318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.376332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.376338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.376345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.376359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.386302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.386354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.386368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.386374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.386379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.386394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.396315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.396368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.396382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.396391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.396397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.396411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.454 [2024-11-20 17:21:51.406390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.454 [2024-11-20 17:21:51.406448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.454 [2024-11-20 17:21:51.406461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.454 [2024-11-20 17:21:51.406468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.454 [2024-11-20 17:21:51.406474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.454 [2024-11-20 17:21:51.406488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.454 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-20 17:21:51.416387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.455 [2024-11-20 17:21:51.416439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.455 [2024-11-20 17:21:51.416452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.455 [2024-11-20 17:21:51.416459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.455 [2024-11-20 17:21:51.416465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.455 [2024-11-20 17:21:51.416478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-20 17:21:51.426427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.455 [2024-11-20 17:21:51.426480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.455 [2024-11-20 17:21:51.426493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.455 [2024-11-20 17:21:51.426500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.455 [2024-11-20 17:21:51.426506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.455 [2024-11-20 17:21:51.426519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-20 17:21:51.436459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.455 [2024-11-20 17:21:51.436513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.455 [2024-11-20 17:21:51.436527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.455 [2024-11-20 17:21:51.436533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.455 [2024-11-20 17:21:51.436539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.455 [2024-11-20 17:21:51.436557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-20 17:21:51.446539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.455 [2024-11-20 17:21:51.446595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.455 [2024-11-20 17:21:51.446608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.455 [2024-11-20 17:21:51.446615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.455 [2024-11-20 17:21:51.446621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.455 [2024-11-20 17:21:51.446635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-20 17:21:51.456517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.455 [2024-11-20 17:21:51.456572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.455 [2024-11-20 17:21:51.456585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.455 [2024-11-20 17:21:51.456591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.455 [2024-11-20 17:21:51.456597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.455 [2024-11-20 17:21:51.456611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-20 17:21:51.466528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.455 [2024-11-20 17:21:51.466581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.455 [2024-11-20 17:21:51.466594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.455 [2024-11-20 17:21:51.466601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.455 [2024-11-20 17:21:51.466606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.455 [2024-11-20 17:21:51.466620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-20 17:21:51.476592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.455 [2024-11-20 17:21:51.476696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.455 [2024-11-20 17:21:51.476710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.455 [2024-11-20 17:21:51.476716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.455 [2024-11-20 17:21:51.476722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.455 [2024-11-20 17:21:51.476735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.455 [2024-11-20 17:21:51.486589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.455 [2024-11-20 17:21:51.486644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.455 [2024-11-20 17:21:51.486657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.455 [2024-11-20 17:21:51.486664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.455 [2024-11-20 17:21:51.486669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.455 [2024-11-20 17:21:51.486683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.455 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.496637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.496732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.496746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.496753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.496759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.496773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.506638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.506716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.506729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.506736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.506742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.506755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.516703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.516762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.516776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.516782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.516788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.516802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.526709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.526766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.526779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.526789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.526794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.526808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.536730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.536784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.536797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.536803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.536809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.536822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.546755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.546808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.546821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.546828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.546834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.546848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.556828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.556917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.556930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.556937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.556943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.556956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.566825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.566876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.566889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.566896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.566901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.566918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.576776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.576831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.576844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.576851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.576857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.716 [2024-11-20 17:21:51.576871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.716 qpair failed and we were unable to recover it. 
00:27:33.716 [2024-11-20 17:21:51.586869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.716 [2024-11-20 17:21:51.586922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.716 [2024-11-20 17:21:51.586935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.716 [2024-11-20 17:21:51.586941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.716 [2024-11-20 17:21:51.586947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.586961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.596931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.597001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.597014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.597020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.597026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.597039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.606947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.607002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.607015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.607021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.607027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.607040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.616964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.617051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.617064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.617070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.617076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.617089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.626987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.627047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.627060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.627067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.627073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.627086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.637033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.637101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.637115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.637122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.637128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.637142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.647046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.647102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.647115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.647121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.647127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.647140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.657126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.657179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.657193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.657211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.657217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.657231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.667095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.667149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.667163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.667170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.667175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.667190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.677224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.677278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.677291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.677297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.677303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.677317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.687178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.687239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.687252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.687259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.687264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.687278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.697132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.697186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.697200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.697211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.697216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.697233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.707223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.707281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.707293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.707300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.707306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.717 [2024-11-20 17:21:51.707319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.717 qpair failed and we were unable to recover it. 
00:27:33.717 [2024-11-20 17:21:51.717256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.717 [2024-11-20 17:21:51.717324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.717 [2024-11-20 17:21:51.717339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.717 [2024-11-20 17:21:51.717346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.717 [2024-11-20 17:21:51.717351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.718 [2024-11-20 17:21:51.717366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.718 qpair failed and we were unable to recover it. 
00:27:33.718 [2024-11-20 17:21:51.727208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.718 [2024-11-20 17:21:51.727265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.718 [2024-11-20 17:21:51.727278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.718 [2024-11-20 17:21:51.727285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.718 [2024-11-20 17:21:51.727291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.718 [2024-11-20 17:21:51.727305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.718 qpair failed and we were unable to recover it. 
00:27:33.718 [2024-11-20 17:21:51.737265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.718 [2024-11-20 17:21:51.737317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.718 [2024-11-20 17:21:51.737331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.718 [2024-11-20 17:21:51.737338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.718 [2024-11-20 17:21:51.737344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.718 [2024-11-20 17:21:51.737359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.718 qpair failed and we were unable to recover it. 
00:27:33.718 [2024-11-20 17:21:51.747366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.718 [2024-11-20 17:21:51.747419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.718 [2024-11-20 17:21:51.747432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.718 [2024-11-20 17:21:51.747439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.718 [2024-11-20 17:21:51.747445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.718 [2024-11-20 17:21:51.747458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.718 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.757310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.981 [2024-11-20 17:21:51.757367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.981 [2024-11-20 17:21:51.757382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.981 [2024-11-20 17:21:51.757388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.981 [2024-11-20 17:21:51.757394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.981 [2024-11-20 17:21:51.757408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.981 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.767390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.981 [2024-11-20 17:21:51.767449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.981 [2024-11-20 17:21:51.767463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.981 [2024-11-20 17:21:51.767470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.981 [2024-11-20 17:21:51.767476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.981 [2024-11-20 17:21:51.767489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.981 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.777428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.981 [2024-11-20 17:21:51.777485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.981 [2024-11-20 17:21:51.777498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.981 [2024-11-20 17:21:51.777505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.981 [2024-11-20 17:21:51.777511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.981 [2024-11-20 17:21:51.777524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.981 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.787483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.981 [2024-11-20 17:21:51.787539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.981 [2024-11-20 17:21:51.787552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.981 [2024-11-20 17:21:51.787563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.981 [2024-11-20 17:21:51.787568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.981 [2024-11-20 17:21:51.787582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.981 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.797460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.981 [2024-11-20 17:21:51.797546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.981 [2024-11-20 17:21:51.797559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.981 [2024-11-20 17:21:51.797565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.981 [2024-11-20 17:21:51.797570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.981 [2024-11-20 17:21:51.797584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.981 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.807525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.981 [2024-11-20 17:21:51.807583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.981 [2024-11-20 17:21:51.807596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.981 [2024-11-20 17:21:51.807603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.981 [2024-11-20 17:21:51.807609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.981 [2024-11-20 17:21:51.807622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.981 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.817467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.981 [2024-11-20 17:21:51.817527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.981 [2024-11-20 17:21:51.817540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.981 [2024-11-20 17:21:51.817547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.981 [2024-11-20 17:21:51.817552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.981 [2024-11-20 17:21:51.817566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.981 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.827583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.981 [2024-11-20 17:21:51.827639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.981 [2024-11-20 17:21:51.827652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.981 [2024-11-20 17:21:51.827659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.981 [2024-11-20 17:21:51.827665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.981 [2024-11-20 17:21:51.827681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.981 qpair failed and we were unable to recover it. 
00:27:33.981 [2024-11-20 17:21:51.837607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.837661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.837674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.837681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.837687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.837700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.847573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.847641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.847655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.847661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.847667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.847680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.857618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.857708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.857722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.857728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.857734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.857748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.867685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.867758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.867771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.867777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.867783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.867796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.877737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.877792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.877805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.877812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.877817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.877830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.887747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.887828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.887842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.887848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.887854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.887868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.897768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.897843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.897857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.897863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.897869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.897882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.907808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.907862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.907875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.907881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.907888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.907901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.917837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.917891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.917908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.917914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.917920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.917933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.927860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.927944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.927958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.927964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.927970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.927984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.937819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.937877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.937890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.937896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.937902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.937916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.947923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.947993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.948006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.948013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.948019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.948032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.957964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.982 [2024-11-20 17:21:51.958016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.982 [2024-11-20 17:21:51.958029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.982 [2024-11-20 17:21:51.958036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.982 [2024-11-20 17:21:51.958042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.982 [2024-11-20 17:21:51.958059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.982 qpair failed and we were unable to recover it. 
00:27:33.982 [2024-11-20 17:21:51.967987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.983 [2024-11-20 17:21:51.968039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.983 [2024-11-20 17:21:51.968053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.983 [2024-11-20 17:21:51.968059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.983 [2024-11-20 17:21:51.968065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.983 [2024-11-20 17:21:51.968079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.983 qpair failed and we were unable to recover it. 
00:27:33.983 [2024-11-20 17:21:51.977954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.983 [2024-11-20 17:21:51.978044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.983 [2024-11-20 17:21:51.978057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.983 [2024-11-20 17:21:51.978064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.983 [2024-11-20 17:21:51.978069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.983 [2024-11-20 17:21:51.978083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.983 qpair failed and we were unable to recover it. 
00:27:33.983 [2024-11-20 17:21:51.987957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.983 [2024-11-20 17:21:51.988012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.983 [2024-11-20 17:21:51.988026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.983 [2024-11-20 17:21:51.988032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.983 [2024-11-20 17:21:51.988038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.983 [2024-11-20 17:21:51.988052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.983 qpair failed and we were unable to recover it. 
00:27:33.983 [2024-11-20 17:21:51.998063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.983 [2024-11-20 17:21:51.998126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.983 [2024-11-20 17:21:51.998140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.983 [2024-11-20 17:21:51.998146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.983 [2024-11-20 17:21:51.998152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.983 [2024-11-20 17:21:51.998165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.983 qpair failed and we were unable to recover it. 
00:27:33.983 [2024-11-20 17:21:52.008080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.983 [2024-11-20 17:21:52.008133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.983 [2024-11-20 17:21:52.008147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.983 [2024-11-20 17:21:52.008153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.983 [2024-11-20 17:21:52.008159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:33.983 [2024-11-20 17:21:52.008174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.983 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.018113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.018175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.018221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.018230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.018238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.018256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.028125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.028216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.028232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.028239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.028244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.028259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.038109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.038165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.038179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.038186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.038192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.038212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.048294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.048380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.048397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.048404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.048410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.048424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.058254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.058335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.058349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.058355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.058361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.058375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.068172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.068220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.068234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.068240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.068246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.068260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.078291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.078347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.078360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.078367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.078373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.078387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.088312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.088400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.088414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.088420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.088426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.088442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.098345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.098402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.098415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.098422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.098428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.290 [2024-11-20 17:21:52.098441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.290 qpair failed and we were unable to recover it. 
00:27:34.290 [2024-11-20 17:21:52.108370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.290 [2024-11-20 17:21:52.108469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.290 [2024-11-20 17:21:52.108482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.290 [2024-11-20 17:21:52.108488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.290 [2024-11-20 17:21:52.108495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.108508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.118342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.118395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.118407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.118414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.118420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.118434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.128425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.128483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.128499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.128506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.128512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.128527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.138442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.138497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.138512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.138519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.138525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.138539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.148423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.148476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.148490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.148497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.148503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.148517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.158586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.158658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.158672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.158679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.158685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.158699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.168562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.168662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.168675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.168682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.168688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.168702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.178505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.178563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.178579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.178586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.178592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.178605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.188626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.188674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.188687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.188694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.188699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.188713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.198630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.198683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.198696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.198702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.198707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.198721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.208677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.208734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.208746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.208753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.208758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.208771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.218740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.218804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.218817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.218823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.218829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.218846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.228726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.228774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.228788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.228794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.291 [2024-11-20 17:21:52.228800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.291 [2024-11-20 17:21:52.228814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.291 qpair failed and we were unable to recover it. 
00:27:34.291 [2024-11-20 17:21:52.238769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.291 [2024-11-20 17:21:52.238823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.291 [2024-11-20 17:21:52.238837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.291 [2024-11-20 17:21:52.238843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.292 [2024-11-20 17:21:52.238849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.292 [2024-11-20 17:21:52.238862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.292 qpair failed and we were unable to recover it. 
00:27:34.292 [2024-11-20 17:21:52.248802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.292 [2024-11-20 17:21:52.248865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.292 [2024-11-20 17:21:52.248878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.292 [2024-11-20 17:21:52.248884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.292 [2024-11-20 17:21:52.248890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.292 [2024-11-20 17:21:52.248904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.292 qpair failed and we were unable to recover it. 
00:27:34.292 [2024-11-20 17:21:52.258804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.292 [2024-11-20 17:21:52.258857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.292 [2024-11-20 17:21:52.258871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.292 [2024-11-20 17:21:52.258878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.292 [2024-11-20 17:21:52.258884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.292 [2024-11-20 17:21:52.258897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.292 qpair failed and we were unable to recover it. 
00:27:34.292 [2024-11-20 17:21:52.268770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.292 [2024-11-20 17:21:52.268853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.292 [2024-11-20 17:21:52.268867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.292 [2024-11-20 17:21:52.268875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.292 [2024-11-20 17:21:52.268881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.292 [2024-11-20 17:21:52.268897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.292 qpair failed and we were unable to recover it. 
00:27:34.292 [2024-11-20 17:21:52.278821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.292 [2024-11-20 17:21:52.278876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.292 [2024-11-20 17:21:52.278889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.292 [2024-11-20 17:21:52.278896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.292 [2024-11-20 17:21:52.278901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.292 [2024-11-20 17:21:52.278915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.292 qpair failed and we were unable to recover it. 
00:27:34.292 [2024-11-20 17:21:52.288823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.292 [2024-11-20 17:21:52.288875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.292 [2024-11-20 17:21:52.288888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.292 [2024-11-20 17:21:52.288894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.292 [2024-11-20 17:21:52.288900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.292 [2024-11-20 17:21:52.288912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.292 qpair failed and we were unable to recover it. 
00:27:34.292 [2024-11-20 17:21:52.298930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.292 [2024-11-20 17:21:52.298984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.292 [2024-11-20 17:21:52.298998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.292 [2024-11-20 17:21:52.299004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.292 [2024-11-20 17:21:52.299010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.292 [2024-11-20 17:21:52.299024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.292 qpair failed and we were unable to recover it. 
00:27:34.615 [2024-11-20 17:21:52.308928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.615 [2024-11-20 17:21:52.308974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.615 [2024-11-20 17:21:52.308993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.615 [2024-11-20 17:21:52.309000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.615 [2024-11-20 17:21:52.309006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.615 [2024-11-20 17:21:52.309020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.615 qpair failed and we were unable to recover it. 
00:27:34.615 [2024-11-20 17:21:52.319000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.615 [2024-11-20 17:21:52.319056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.615 [2024-11-20 17:21:52.319069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.615 [2024-11-20 17:21:52.319076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.615 [2024-11-20 17:21:52.319082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.615 [2024-11-20 17:21:52.319095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.615 qpair failed and we were unable to recover it. 
00:27:34.615 [2024-11-20 17:21:52.329021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.615 [2024-11-20 17:21:52.329071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.615 [2024-11-20 17:21:52.329085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.615 [2024-11-20 17:21:52.329091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.615 [2024-11-20 17:21:52.329097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.615 [2024-11-20 17:21:52.329111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.615 qpair failed and we were unable to recover it. 
00:27:34.615 [2024-11-20 17:21:52.339087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.615 [2024-11-20 17:21:52.339139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.615 [2024-11-20 17:21:52.339153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.615 [2024-11-20 17:21:52.339160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.615 [2024-11-20 17:21:52.339165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.615 [2024-11-20 17:21:52.339180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.615 qpair failed and we were unable to recover it. 
00:27:34.615 [2024-11-20 17:21:52.349075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.615 [2024-11-20 17:21:52.349128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.615 [2024-11-20 17:21:52.349142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.615 [2024-11-20 17:21:52.349149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.615 [2024-11-20 17:21:52.349158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.615 [2024-11-20 17:21:52.349172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.615 qpair failed and we were unable to recover it. 
00:27:34.615 [2024-11-20 17:21:52.359118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.615 [2024-11-20 17:21:52.359171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.615 [2024-11-20 17:21:52.359184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.615 [2024-11-20 17:21:52.359190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.615 [2024-11-20 17:21:52.359196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.615 [2024-11-20 17:21:52.359214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.615 qpair failed and we were unable to recover it.
00:27:34.615 [2024-11-20 17:21:52.369138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.615 [2024-11-20 17:21:52.369193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.615 [2024-11-20 17:21:52.369210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.615 [2024-11-20 17:21:52.369217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.615 [2024-11-20 17:21:52.369222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.615 [2024-11-20 17:21:52.369236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.615 qpair failed and we were unable to recover it.
00:27:34.615 [2024-11-20 17:21:52.379253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.615 [2024-11-20 17:21:52.379312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.615 [2024-11-20 17:21:52.379325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.615 [2024-11-20 17:21:52.379331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.615 [2024-11-20 17:21:52.379337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.615 [2024-11-20 17:21:52.379351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.615 qpair failed and we were unable to recover it.
00:27:34.615 [2024-11-20 17:21:52.389224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.615 [2024-11-20 17:21:52.389280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.389293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.389300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.389305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.389318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.399320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.399426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.399440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.399448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.399453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.399468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.409322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.409376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.409390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.409396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.409402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.409416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.419273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.419326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.419339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.419346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.419351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.419365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.429307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.429360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.429373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.429380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.429385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.429399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.439352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.439407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.439424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.439431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.439437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.439450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.449372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.449423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.449436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.449442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.449448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.449462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.459392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.459441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.459454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.459461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.459466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.459480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.469422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.469478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.469491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.469498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.469504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.469517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.479393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.479444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.479458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.479464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.479473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.616 [2024-11-20 17:21:52.479487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.616 qpair failed and we were unable to recover it.
00:27:34.616 [2024-11-20 17:21:52.489483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.616 [2024-11-20 17:21:52.489580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.616 [2024-11-20 17:21:52.489593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.616 [2024-11-20 17:21:52.489600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.616 [2024-11-20 17:21:52.489605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.489620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.499534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.499590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.499603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.499610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.499615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.499629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.509524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.509576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.509589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.509595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.509601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.509615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.519617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.519675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.519688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.519695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.519701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.519715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.529520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.529572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.529585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.529592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.529598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.529611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.539619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.539708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.539721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.539728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.539734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.539747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.549657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.549708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.549722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.549728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.549734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.549747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.559605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.559657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.559671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.559677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.559683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.559697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.569700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.569752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.569769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.569775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.569781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.569795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.579729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.579799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.579811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.579818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.579823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.579836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.589759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.589852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.589865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.589872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.589877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.589891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.617 [2024-11-20 17:21:52.599809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.617 [2024-11-20 17:21:52.599864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.617 [2024-11-20 17:21:52.599877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.617 [2024-11-20 17:21:52.599883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.617 [2024-11-20 17:21:52.599889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.617 [2024-11-20 17:21:52.599903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.617 qpair failed and we were unable to recover it.
00:27:34.618 [2024-11-20 17:21:52.609822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.618 [2024-11-20 17:21:52.609874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.618 [2024-11-20 17:21:52.609887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.618 [2024-11-20 17:21:52.609894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.618 [2024-11-20 17:21:52.609903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.618 [2024-11-20 17:21:52.609916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.618 qpair failed and we were unable to recover it.
00:27:34.618 [2024-11-20 17:21:52.619837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.618 [2024-11-20 17:21:52.619920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.618 [2024-11-20 17:21:52.619934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.618 [2024-11-20 17:21:52.619941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.618 [2024-11-20 17:21:52.619947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.618 [2024-11-20 17:21:52.619962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.618 qpair failed and we were unable to recover it.
00:27:34.618 [2024-11-20 17:21:52.629865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.618 [2024-11-20 17:21:52.629936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.618 [2024-11-20 17:21:52.629949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.618 [2024-11-20 17:21:52.629956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.618 [2024-11-20 17:21:52.629961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.618 [2024-11-20 17:21:52.629975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.618 qpair failed and we were unable to recover it.
00:27:34.618 [2024-11-20 17:21:52.639903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.618 [2024-11-20 17:21:52.639977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.618 [2024-11-20 17:21:52.639991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.618 [2024-11-20 17:21:52.639998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.618 [2024-11-20 17:21:52.640004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.618 [2024-11-20 17:21:52.640018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.618 qpair failed and we were unable to recover it.
00:27:34.618 [2024-11-20 17:21:52.649932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.618 [2024-11-20 17:21:52.649985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.618 [2024-11-20 17:21:52.649999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.618 [2024-11-20 17:21:52.650006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.618 [2024-11-20 17:21:52.650011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.618 [2024-11-20 17:21:52.650025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.618 qpair failed and we were unable to recover it.
00:27:34.878 [2024-11-20 17:21:52.659962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.878 [2024-11-20 17:21:52.660063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.878 [2024-11-20 17:21:52.660077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.878 [2024-11-20 17:21:52.660083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.878 [2024-11-20 17:21:52.660089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.878 [2024-11-20 17:21:52.660103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.879 qpair failed and we were unable to recover it.
00:27:34.879 [2024-11-20 17:21:52.669984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.879 [2024-11-20 17:21:52.670054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.879 [2024-11-20 17:21:52.670068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.879 [2024-11-20 17:21:52.670075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.879 [2024-11-20 17:21:52.670080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.879 [2024-11-20 17:21:52.670094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.879 qpair failed and we were unable to recover it.
00:27:34.879 [2024-11-20 17:21:52.679992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.879 [2024-11-20 17:21:52.680079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.879 [2024-11-20 17:21:52.680092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.879 [2024-11-20 17:21:52.680098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.879 [2024-11-20 17:21:52.680104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.879 [2024-11-20 17:21:52.680117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.879 qpair failed and we were unable to recover it.
00:27:34.879 [2024-11-20 17:21:52.690042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.879 [2024-11-20 17:21:52.690094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.879 [2024-11-20 17:21:52.690108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.879 [2024-11-20 17:21:52.690114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.879 [2024-11-20 17:21:52.690121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.879 [2024-11-20 17:21:52.690134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.879 qpair failed and we were unable to recover it.
00:27:34.879 [2024-11-20 17:21:52.700089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:34.879 [2024-11-20 17:21:52.700141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:34.879 [2024-11-20 17:21:52.700157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:34.879 [2024-11-20 17:21:52.700164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:34.879 [2024-11-20 17:21:52.700170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0
00:27:34.879 [2024-11-20 17:21:52.700183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:34.879 qpair failed and we were unable to recover it.
00:27:34.879 [2024-11-20 17:21:52.710034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.879 [2024-11-20 17:21:52.710084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.879 [2024-11-20 17:21:52.710097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.879 [2024-11-20 17:21:52.710104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.879 [2024-11-20 17:21:52.710110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.879 [2024-11-20 17:21:52.710123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.879 qpair failed and we were unable to recover it. 
00:27:34.879 [2024-11-20 17:21:52.720183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.879 [2024-11-20 17:21:52.720288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.879 [2024-11-20 17:21:52.720302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.879 [2024-11-20 17:21:52.720308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.879 [2024-11-20 17:21:52.720315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.879 [2024-11-20 17:21:52.720329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.879 qpair failed and we were unable to recover it. 
00:27:34.879 [2024-11-20 17:21:52.730163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.879 [2024-11-20 17:21:52.730221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.879 [2024-11-20 17:21:52.730235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.879 [2024-11-20 17:21:52.730242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.879 [2024-11-20 17:21:52.730247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.879 [2024-11-20 17:21:52.730261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.879 qpair failed and we were unable to recover it. 
00:27:34.879 [2024-11-20 17:21:52.740159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.879 [2024-11-20 17:21:52.740239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.879 [2024-11-20 17:21:52.740253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.879 [2024-11-20 17:21:52.740260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.879 [2024-11-20 17:21:52.740268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.879 [2024-11-20 17:21:52.740282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.879 qpair failed and we were unable to recover it. 
00:27:34.879 [2024-11-20 17:21:52.750140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.879 [2024-11-20 17:21:52.750227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.879 [2024-11-20 17:21:52.750241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.879 [2024-11-20 17:21:52.750247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.879 [2024-11-20 17:21:52.750253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.879 [2024-11-20 17:21:52.750266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.879 qpair failed and we were unable to recover it. 
00:27:34.879 [2024-11-20 17:21:52.760232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.879 [2024-11-20 17:21:52.760314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.879 [2024-11-20 17:21:52.760326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.879 [2024-11-20 17:21:52.760333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.879 [2024-11-20 17:21:52.760338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.879 [2024-11-20 17:21:52.760352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.770279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.770335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.770348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.770355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.770361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.770374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.780344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.780405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.780418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.780425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.780430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.780444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.790369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.790421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.790434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.790441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.790447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.790461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.800375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.800430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.800442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.800449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.800455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.800468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.810427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.810483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.810496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.810503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.810509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.810522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.820422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.820480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.820493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.820499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.820505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.820518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.830443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.830497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.830513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.830520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.830525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.830539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.840477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.840531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.840545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.840552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.840558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.840571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.850508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.850563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.850576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.850582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.850588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.850602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.860554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.860608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.880 [2024-11-20 17:21:52.860621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.880 [2024-11-20 17:21:52.860627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.880 [2024-11-20 17:21:52.860633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.880 [2024-11-20 17:21:52.860647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.880 qpair failed and we were unable to recover it. 
00:27:34.880 [2024-11-20 17:21:52.870585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.880 [2024-11-20 17:21:52.870645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.881 [2024-11-20 17:21:52.870658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.881 [2024-11-20 17:21:52.870665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.881 [2024-11-20 17:21:52.870673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.881 [2024-11-20 17:21:52.870687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.881 qpair failed and we were unable to recover it. 
00:27:34.881 [2024-11-20 17:21:52.880628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.881 [2024-11-20 17:21:52.880685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.881 [2024-11-20 17:21:52.880698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.881 [2024-11-20 17:21:52.880704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.881 [2024-11-20 17:21:52.880710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.881 [2024-11-20 17:21:52.880723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.881 qpair failed and we were unable to recover it. 
00:27:34.881 [2024-11-20 17:21:52.890640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.881 [2024-11-20 17:21:52.890696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.881 [2024-11-20 17:21:52.890709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.881 [2024-11-20 17:21:52.890716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.881 [2024-11-20 17:21:52.890722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.881 [2024-11-20 17:21:52.890734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.881 qpair failed and we were unable to recover it. 
00:27:34.881 [2024-11-20 17:21:52.900644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.881 [2024-11-20 17:21:52.900694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.881 [2024-11-20 17:21:52.900708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.881 [2024-11-20 17:21:52.900714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.881 [2024-11-20 17:21:52.900720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.881 [2024-11-20 17:21:52.900734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.881 qpair failed and we were unable to recover it. 
00:27:34.881 [2024-11-20 17:21:52.910667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.881 [2024-11-20 17:21:52.910720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.881 [2024-11-20 17:21:52.910733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.881 [2024-11-20 17:21:52.910740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.881 [2024-11-20 17:21:52.910746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:34.881 [2024-11-20 17:21:52.910760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.881 qpair failed and we were unable to recover it. 
00:27:35.141 [2024-11-20 17:21:52.920717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.141 [2024-11-20 17:21:52.920815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.141 [2024-11-20 17:21:52.920830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.141 [2024-11-20 17:21:52.920836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.141 [2024-11-20 17:21:52.920842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.141 [2024-11-20 17:21:52.920856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.141 qpair failed and we were unable to recover it. 
00:27:35.141 [2024-11-20 17:21:52.930778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.141 [2024-11-20 17:21:52.930837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.141 [2024-11-20 17:21:52.930851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.141 [2024-11-20 17:21:52.930857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.141 [2024-11-20 17:21:52.930863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.141 [2024-11-20 17:21:52.930876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.141 qpair failed and we were unable to recover it. 
00:27:35.141 [2024-11-20 17:21:52.940761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.141 [2024-11-20 17:21:52.940810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.141 [2024-11-20 17:21:52.940823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.141 [2024-11-20 17:21:52.940830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.141 [2024-11-20 17:21:52.940836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.141 [2024-11-20 17:21:52.940850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.141 qpair failed and we were unable to recover it. 
00:27:35.141 [2024-11-20 17:21:52.950794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.141 [2024-11-20 17:21:52.950850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.141 [2024-11-20 17:21:52.950863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.141 [2024-11-20 17:21:52.950870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.141 [2024-11-20 17:21:52.950876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.141 [2024-11-20 17:21:52.950890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.141 qpair failed and we were unable to recover it. 
00:27:35.141 [2024-11-20 17:21:52.960819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.141 [2024-11-20 17:21:52.960873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.141 [2024-11-20 17:21:52.960892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.141 [2024-11-20 17:21:52.960899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.141 [2024-11-20 17:21:52.960904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.141 [2024-11-20 17:21:52.960917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.141 qpair failed and we were unable to recover it. 
00:27:35.141 [2024-11-20 17:21:52.970858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.141 [2024-11-20 17:21:52.970914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.141 [2024-11-20 17:21:52.970927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.141 [2024-11-20 17:21:52.970933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.141 [2024-11-20 17:21:52.970939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.141 [2024-11-20 17:21:52.970953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.141 qpair failed and we were unable to recover it. 
00:27:35.141 [2024-11-20 17:21:52.980877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.141 [2024-11-20 17:21:52.980952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.141 [2024-11-20 17:21:52.980966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.141 [2024-11-20 17:21:52.980973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.141 [2024-11-20 17:21:52.980978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.141 [2024-11-20 17:21:52.980992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.141 qpair failed and we were unable to recover it. 
00:27:35.141 [... the same six-message CONNECT failure sequence (ctrlr.c "Unknown controller ID 0x1", nvme_fabric.c "Connect command failed, rc -5", "sct 1, sc 130", nvme_tcp.c "Failed to poll NVMe-oF Fabric CONNECT command", "Failed to connect tqpair=0x14e4ba0", nvme_qpair.c "CQ transport error -6 (No such device or address) on qpair id 3") repeats 34 more times at ~10 ms intervals, from 17:21:52.990 through 17:21:53.321, each ending "qpair failed and we were unable to recover it." ...]
00:27:35.403 [2024-11-20 17:21:53.331890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.403 [2024-11-20 17:21:53.331941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.403 [2024-11-20 17:21:53.331955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.403 [2024-11-20 17:21:53.331961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.331967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.331981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.341861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.341952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.341966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.341972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.341978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.341992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.351982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.352037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.352054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.352061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.352066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.352081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.361909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.361966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.361979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.361986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.361992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.362006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.372021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.372083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.372096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.372103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.372109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.372122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.382049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.382108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.382121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.382128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.382134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.382147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.392061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.392115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.392128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.392135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.392143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.392157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.402104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.402162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.402175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.402182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.402188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.402206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.412130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.412190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.412207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.412215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.412220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.412235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.422115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.422174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.422188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.422195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.422200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.422221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.404 [2024-11-20 17:21:53.432192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.404 [2024-11-20 17:21:53.432248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.404 [2024-11-20 17:21:53.432263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.404 [2024-11-20 17:21:53.432269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.404 [2024-11-20 17:21:53.432275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.404 [2024-11-20 17:21:53.432289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.404 qpair failed and we were unable to recover it. 
00:27:35.664 [2024-11-20 17:21:53.442235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.664 [2024-11-20 17:21:53.442295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.664 [2024-11-20 17:21:53.442308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.664 [2024-11-20 17:21:53.442315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.664 [2024-11-20 17:21:53.442320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.664 [2024-11-20 17:21:53.442334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.664 qpair failed and we were unable to recover it. 
00:27:35.664 [2024-11-20 17:21:53.452239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.664 [2024-11-20 17:21:53.452292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.664 [2024-11-20 17:21:53.452306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.664 [2024-11-20 17:21:53.452313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.664 [2024-11-20 17:21:53.452319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.664 [2024-11-20 17:21:53.452333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.664 qpair failed and we were unable to recover it. 
00:27:35.664 [2024-11-20 17:21:53.462262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.664 [2024-11-20 17:21:53.462316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.664 [2024-11-20 17:21:53.462330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.664 [2024-11-20 17:21:53.462337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.664 [2024-11-20 17:21:53.462342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.664 [2024-11-20 17:21:53.462356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.664 qpair failed and we were unable to recover it. 
00:27:35.664 [2024-11-20 17:21:53.472283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.664 [2024-11-20 17:21:53.472335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.472349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.472356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.472362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.472375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.482316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.482374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.482391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.482398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.482403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.482418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.492371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.492424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.492437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.492444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.492450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.492464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.502355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.502459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.502473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.502479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.502485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.502499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.512346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.512396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.512409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.512416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.512422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.512436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.522408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.522463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.522477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.522484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.522492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.522506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.532496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.532559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.532572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.532579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.532585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.532598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.542491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.542561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.542575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.542582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.542587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.542601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.552534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.552583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.552596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.552603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.552608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.552622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.562531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.562586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.562600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.562606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.562612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.562626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.572573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.572631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.572644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.572650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.572657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.572671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.582607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.582660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.582673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.582679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.582685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.582699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.592630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.665 [2024-11-20 17:21:53.592683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.665 [2024-11-20 17:21:53.592697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.665 [2024-11-20 17:21:53.592703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.665 [2024-11-20 17:21:53.592709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.665 [2024-11-20 17:21:53.592723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.665 qpair failed and we were unable to recover it. 
00:27:35.665 [2024-11-20 17:21:53.602667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.666 [2024-11-20 17:21:53.602726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.666 [2024-11-20 17:21:53.602739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.666 [2024-11-20 17:21:53.602746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.666 [2024-11-20 17:21:53.602752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.666 [2024-11-20 17:21:53.602766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.666 qpair failed and we were unable to recover it. 
00:27:35.928 [2024-11-20 17:21:53.953639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.928 [2024-11-20 17:21:53.953688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.928 [2024-11-20 17:21:53.953701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.928 [2024-11-20 17:21:53.953707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.928 [2024-11-20 17:21:53.953713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.928 [2024-11-20 17:21:53.953727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.928 qpair failed and we were unable to recover it. 
00:27:35.928 [2024-11-20 17:21:53.963655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.928 [2024-11-20 17:21:53.963708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.928 [2024-11-20 17:21:53.963721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.928 [2024-11-20 17:21:53.963728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.928 [2024-11-20 17:21:53.963733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:35.928 [2024-11-20 17:21:53.963748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:35.928 qpair failed and we were unable to recover it. 
00:27:36.189 [2024-11-20 17:21:53.973698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.189 [2024-11-20 17:21:53.973750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.189 [2024-11-20 17:21:53.973763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.189 [2024-11-20 17:21:53.973770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.189 [2024-11-20 17:21:53.973775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.189 [2024-11-20 17:21:53.973789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.189 qpair failed and we were unable to recover it. 
00:27:36.189 [2024-11-20 17:21:53.983724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.189 [2024-11-20 17:21:53.983773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.189 [2024-11-20 17:21:53.983786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.189 [2024-11-20 17:21:53.983793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.189 [2024-11-20 17:21:53.983798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.189 [2024-11-20 17:21:53.983812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.189 qpair failed and we were unable to recover it. 
00:27:36.189 [2024-11-20 17:21:53.993677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.189 [2024-11-20 17:21:53.993732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.189 [2024-11-20 17:21:53.993745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.189 [2024-11-20 17:21:53.993752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.189 [2024-11-20 17:21:53.993757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.189 [2024-11-20 17:21:53.993771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.189 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.003789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.003841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.003857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.003864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.003870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.003883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.013805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.013858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.013871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.013878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.013884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.013897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.023851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.023925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.023938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.023944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.023950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.023963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.033872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.033926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.033939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.033945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.033951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.033964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.043906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.043997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.044011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.044017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.044026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.044039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.053927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.053981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.053994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.054000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.054006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.054019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.063939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.063989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.064003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.064009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.064015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.064028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.073892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.073952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.073965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.073972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.073977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.073990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.083932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.083985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.083998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.084004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.190 [2024-11-20 17:21:54.084009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.190 [2024-11-20 17:21:54.084023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.190 qpair failed and we were unable to recover it. 
00:27:36.190 [2024-11-20 17:21:54.094025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.190 [2024-11-20 17:21:54.094078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.190 [2024-11-20 17:21:54.094091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.190 [2024-11-20 17:21:54.094098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.094104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.094117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.104087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.104142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.104155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.104163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.104169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.104183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.114076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.114133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.114147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.114154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.114159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.114173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.124112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.124167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.124183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.124191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.124197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.124216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.134138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.134193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.134213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.134220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.134226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.134240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.144083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.144138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.144151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.144158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.144164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.144177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.154191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.154266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.154280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.154286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.154292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.154305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.164252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.164353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.164367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.164373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.164378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.164393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.174240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.174294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.174307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.174317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.174323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.174337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.184270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.184325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.184338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.184345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.184351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.184364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.194301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.194353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.191 [2024-11-20 17:21:54.194366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.191 [2024-11-20 17:21:54.194372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.191 [2024-11-20 17:21:54.194378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.191 [2024-11-20 17:21:54.194392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.191 qpair failed and we were unable to recover it. 
00:27:36.191 [2024-11-20 17:21:54.204333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.191 [2024-11-20 17:21:54.204402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.192 [2024-11-20 17:21:54.204415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.192 [2024-11-20 17:21:54.204422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.192 [2024-11-20 17:21:54.204427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.192 [2024-11-20 17:21:54.204441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.192 qpair failed and we were unable to recover it. 
00:27:36.192 [2024-11-20 17:21:54.214342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.192 [2024-11-20 17:21:54.214399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.192 [2024-11-20 17:21:54.214413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.192 [2024-11-20 17:21:54.214419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.192 [2024-11-20 17:21:54.214424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.192 [2024-11-20 17:21:54.214438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.192 qpair failed and we were unable to recover it. 
00:27:36.192 [2024-11-20 17:21:54.224305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.192 [2024-11-20 17:21:54.224360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.192 [2024-11-20 17:21:54.224374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.192 [2024-11-20 17:21:54.224381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.192 [2024-11-20 17:21:54.224386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.192 [2024-11-20 17:21:54.224399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.192 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.234403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.234453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.234467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.234474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.234479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.234493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.244467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.244523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.244536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.244542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.244548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.244561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.254462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.254515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.254528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.254534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.254540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.254554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.264524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.264574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.264592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.264598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.264604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.264618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.274514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.274567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.274580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.274587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.274593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.274607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.284550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.284604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.284617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.284623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.284629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.284643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.294493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.294557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.294571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.294577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.294583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.294598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.304641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.304700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.304714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.304724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.304730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.304744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.314635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.314684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.314698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.452 [2024-11-20 17:21:54.314705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.452 [2024-11-20 17:21:54.314711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.452 [2024-11-20 17:21:54.314725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.452 qpair failed and we were unable to recover it. 
00:27:36.452 [2024-11-20 17:21:54.324600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.452 [2024-11-20 17:21:54.324654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.452 [2024-11-20 17:21:54.324667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.324674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.324680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.324694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.334700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.334757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.334770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.334777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.334783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.334797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.344638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.344705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.344718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.344725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.344731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.344745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.354723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.354771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.354785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.354792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.354798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.354811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.364813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.364865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.364879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.364886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.364892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.364906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.374858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.374912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.374927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.374934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.374940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.374954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.384828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.384879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.384892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.384898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.384904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.384918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.394914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.394971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.394988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.394994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.395000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.395014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.404901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.404955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.404968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.404974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.404980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.404994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.414967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.415070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.415083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.415089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.415095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.415109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.424953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.425003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.425016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.425023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.425029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.425042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.434966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.435048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.435062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.435072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.435077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.435091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.445009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.445062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.445076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.445083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.453 [2024-11-20 17:21:54.445089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.453 [2024-11-20 17:21:54.445103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.453 qpair failed and we were unable to recover it. 
00:27:36.453 [2024-11-20 17:21:54.454955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.453 [2024-11-20 17:21:54.455009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.453 [2024-11-20 17:21:54.455022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.453 [2024-11-20 17:21:54.455029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.454 [2024-11-20 17:21:54.455035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.454 [2024-11-20 17:21:54.455049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.454 qpair failed and we were unable to recover it. 
00:27:36.454 [2024-11-20 17:21:54.464984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.454 [2024-11-20 17:21:54.465040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.454 [2024-11-20 17:21:54.465054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.454 [2024-11-20 17:21:54.465061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.454 [2024-11-20 17:21:54.465067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.454 [2024-11-20 17:21:54.465081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.454 qpair failed and we were unable to recover it. 
00:27:36.454 [2024-11-20 17:21:54.475099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.454 [2024-11-20 17:21:54.475159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.454 [2024-11-20 17:21:54.475172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.454 [2024-11-20 17:21:54.475179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.454 [2024-11-20 17:21:54.475184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.454 [2024-11-20 17:21:54.475198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.454 qpair failed and we were unable to recover it. 
00:27:36.454 [2024-11-20 17:21:54.485040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.454 [2024-11-20 17:21:54.485093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.454 [2024-11-20 17:21:54.485106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.454 [2024-11-20 17:21:54.485113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.454 [2024-11-20 17:21:54.485119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.454 [2024-11-20 17:21:54.485133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.454 qpair failed and we were unable to recover it. 
00:27:36.714 [2024-11-20 17:21:54.495140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.714 [2024-11-20 17:21:54.495214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.715 [2024-11-20 17:21:54.495228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.715 [2024-11-20 17:21:54.495234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.715 [2024-11-20 17:21:54.495240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.715 [2024-11-20 17:21:54.495254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.715 qpair failed and we were unable to recover it. 
00:27:36.715 [2024-11-20 17:21:54.505089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.715 [2024-11-20 17:21:54.505143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.715 [2024-11-20 17:21:54.505156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.715 [2024-11-20 17:21:54.505162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.715 [2024-11-20 17:21:54.505168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.715 [2024-11-20 17:21:54.505182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.715 qpair failed and we were unable to recover it. 
00:27:36.715 [2024-11-20 17:21:54.515192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.715 [2024-11-20 17:21:54.515250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.715 [2024-11-20 17:21:54.515263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.715 [2024-11-20 17:21:54.515270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.715 [2024-11-20 17:21:54.515275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.715 [2024-11-20 17:21:54.515289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.715 qpair failed and we were unable to recover it. 
00:27:36.715 [2024-11-20 17:21:54.525234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.715 [2024-11-20 17:21:54.525301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.715 [2024-11-20 17:21:54.525322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.715 [2024-11-20 17:21:54.525330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.715 [2024-11-20 17:21:54.525335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.715 [2024-11-20 17:21:54.525351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.715 qpair failed and we were unable to recover it. 
00:27:36.715 [2024-11-20 17:21:54.535261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.715 [2024-11-20 17:21:54.535316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.715 [2024-11-20 17:21:54.535330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.715 [2024-11-20 17:21:54.535336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.715 [2024-11-20 17:21:54.535342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.715 [2024-11-20 17:21:54.535356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.715 qpair failed and we were unable to recover it. 
00:27:36.978 [2024-11-20 17:21:54.886264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.978 [2024-11-20 17:21:54.886320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.978 [2024-11-20 17:21:54.886334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.978 [2024-11-20 17:21:54.886341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.978 [2024-11-20 17:21:54.886347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.978 [2024-11-20 17:21:54.886361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.978 qpair failed and we were unable to recover it. 
00:27:36.978 [2024-11-20 17:21:54.896286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.978 [2024-11-20 17:21:54.896351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.978 [2024-11-20 17:21:54.896365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.978 [2024-11-20 17:21:54.896371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.978 [2024-11-20 17:21:54.896377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.978 [2024-11-20 17:21:54.896391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.978 qpair failed and we were unable to recover it. 
00:27:36.978 [2024-11-20 17:21:54.906283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.978 [2024-11-20 17:21:54.906332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.978 [2024-11-20 17:21:54.906346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.978 [2024-11-20 17:21:54.906353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.978 [2024-11-20 17:21:54.906359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.978 [2024-11-20 17:21:54.906373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.978 qpair failed and we were unable to recover it. 
00:27:36.978 [2024-11-20 17:21:54.916269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.978 [2024-11-20 17:21:54.916337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.978 [2024-11-20 17:21:54.916351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.978 [2024-11-20 17:21:54.916357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.978 [2024-11-20 17:21:54.916363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.978 [2024-11-20 17:21:54.916377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.978 qpair failed and we were unable to recover it. 
00:27:36.978 [2024-11-20 17:21:54.926323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.978 [2024-11-20 17:21:54.926377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.978 [2024-11-20 17:21:54.926391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.978 [2024-11-20 17:21:54.926397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.978 [2024-11-20 17:21:54.926403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.978 [2024-11-20 17:21:54.926416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.978 qpair failed and we were unable to recover it. 
00:27:36.978 [2024-11-20 17:21:54.936336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.978 [2024-11-20 17:21:54.936388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.978 [2024-11-20 17:21:54.936402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.978 [2024-11-20 17:21:54.936409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.978 [2024-11-20 17:21:54.936415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.978 [2024-11-20 17:21:54.936428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.978 qpair failed and we were unable to recover it. 
00:27:36.978 [2024-11-20 17:21:54.946421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.978 [2024-11-20 17:21:54.946470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.979 [2024-11-20 17:21:54.946484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.979 [2024-11-20 17:21:54.946490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.979 [2024-11-20 17:21:54.946496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.979 [2024-11-20 17:21:54.946510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.979 qpair failed and we were unable to recover it. 
00:27:36.979 [2024-11-20 17:21:54.956447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.979 [2024-11-20 17:21:54.956502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.979 [2024-11-20 17:21:54.956515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.979 [2024-11-20 17:21:54.956525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.979 [2024-11-20 17:21:54.956531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.979 [2024-11-20 17:21:54.956545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.979 qpair failed and we were unable to recover it. 
00:27:36.979 [2024-11-20 17:21:54.966487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.979 [2024-11-20 17:21:54.966542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.979 [2024-11-20 17:21:54.966555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.979 [2024-11-20 17:21:54.966562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.979 [2024-11-20 17:21:54.966568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.979 [2024-11-20 17:21:54.966582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.979 qpair failed and we were unable to recover it. 
00:27:36.979 [2024-11-20 17:21:54.976499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.979 [2024-11-20 17:21:54.976557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.979 [2024-11-20 17:21:54.976570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.979 [2024-11-20 17:21:54.976577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.979 [2024-11-20 17:21:54.976583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.979 [2024-11-20 17:21:54.976596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.979 qpair failed and we were unable to recover it. 
00:27:36.979 [2024-11-20 17:21:54.986600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.979 [2024-11-20 17:21:54.986661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.979 [2024-11-20 17:21:54.986675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.979 [2024-11-20 17:21:54.986682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.979 [2024-11-20 17:21:54.986687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.979 [2024-11-20 17:21:54.986700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.979 qpair failed and we were unable to recover it. 
00:27:36.979 [2024-11-20 17:21:54.996512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.979 [2024-11-20 17:21:54.996572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.979 [2024-11-20 17:21:54.996586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.979 [2024-11-20 17:21:54.996592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.979 [2024-11-20 17:21:54.996598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.979 [2024-11-20 17:21:54.996615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.979 qpair failed and we were unable to recover it. 
00:27:36.979 [2024-11-20 17:21:55.006635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.979 [2024-11-20 17:21:55.006693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.979 [2024-11-20 17:21:55.006706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.979 [2024-11-20 17:21:55.006713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.979 [2024-11-20 17:21:55.006719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:36.979 [2024-11-20 17:21:55.006733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:36.979 qpair failed and we were unable to recover it. 
00:27:37.239 [2024-11-20 17:21:55.016567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.239 [2024-11-20 17:21:55.016625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.239 [2024-11-20 17:21:55.016638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.239 [2024-11-20 17:21:55.016646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.239 [2024-11-20 17:21:55.016651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.239 [2024-11-20 17:21:55.016665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.239 qpair failed and we were unable to recover it. 
00:27:37.239 [2024-11-20 17:21:55.026587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.239 [2024-11-20 17:21:55.026645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.239 [2024-11-20 17:21:55.026659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.239 [2024-11-20 17:21:55.026666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.239 [2024-11-20 17:21:55.026672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.239 [2024-11-20 17:21:55.026686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.239 qpair failed and we were unable to recover it. 
00:27:37.239 [2024-11-20 17:21:55.036670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.239 [2024-11-20 17:21:55.036722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.239 [2024-11-20 17:21:55.036735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.239 [2024-11-20 17:21:55.036742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.239 [2024-11-20 17:21:55.036748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.239 [2024-11-20 17:21:55.036762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.239 qpair failed and we were unable to recover it. 
00:27:37.239 [2024-11-20 17:21:55.046706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.239 [2024-11-20 17:21:55.046776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.239 [2024-11-20 17:21:55.046789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.239 [2024-11-20 17:21:55.046796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.239 [2024-11-20 17:21:55.046801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.239 [2024-11-20 17:21:55.046815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.239 qpair failed and we were unable to recover it. 
00:27:37.239 [2024-11-20 17:21:55.056724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.239 [2024-11-20 17:21:55.056782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.239 [2024-11-20 17:21:55.056795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.056802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.056808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.056822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.066754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.066829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.066843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.066849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.066855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.066869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.076783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.076834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.076848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.076854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.076861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.076875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.086874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.086979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.086992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.087002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.087009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.087022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.096850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.096920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.096933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.096940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.096946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.096959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.106892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.106942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.106956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.106962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.106968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.106982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.116920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.116974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.116987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.116994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.116999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.117013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.126934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.126990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.127005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.127012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.127019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.127039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.136960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.137008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.137022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.137029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.137035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.137049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.146963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.147017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.147031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.147039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.147044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.147058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.156938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.156985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.156998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.157005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.157010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.157024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.240 [2024-11-20 17:21:55.167055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.240 [2024-11-20 17:21:55.167129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.240 [2024-11-20 17:21:55.167143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.240 [2024-11-20 17:21:55.167149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.240 [2024-11-20 17:21:55.167155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.240 [2024-11-20 17:21:55.167169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.240 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.177077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.177173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.177187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.177194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.177199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.177216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.187110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.187173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.187187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.187193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.187199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.187217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.197145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.197230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.197244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.197251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.197257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.197272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.207178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.207240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.207253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.207260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.207265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.207279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.217220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.217281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.217296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.217306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.217312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.217326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.227274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.227327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.227340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.227347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.227352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.227365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.237257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.237309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.237323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.237331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.237337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.237351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.247313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.247382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.247395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.247401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.247407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.247421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.257296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.257402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.257416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.257422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.257428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.257445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.267339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.267393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.267408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.267415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.267421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.267434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.241 [2024-11-20 17:21:55.277424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.241 [2024-11-20 17:21:55.277483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.241 [2024-11-20 17:21:55.277497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.241 [2024-11-20 17:21:55.277503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.241 [2024-11-20 17:21:55.277509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.241 [2024-11-20 17:21:55.277523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.241 qpair failed and we were unable to recover it. 
00:27:37.501 [2024-11-20 17:21:55.287418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.501 [2024-11-20 17:21:55.287474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.501 [2024-11-20 17:21:55.287488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.501 [2024-11-20 17:21:55.287494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.501 [2024-11-20 17:21:55.287500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.501 [2024-11-20 17:21:55.287514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.501 qpair failed and we were unable to recover it. 
00:27:37.501 [2024-11-20 17:21:55.297436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.501 [2024-11-20 17:21:55.297514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.501 [2024-11-20 17:21:55.297527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.501 [2024-11-20 17:21:55.297534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.501 [2024-11-20 17:21:55.297540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.501 [2024-11-20 17:21:55.297554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.501 qpair failed and we were unable to recover it. 
00:27:37.501 [2024-11-20 17:21:55.307508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.501 [2024-11-20 17:21:55.307610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.501 [2024-11-20 17:21:55.307623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.501 [2024-11-20 17:21:55.307630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.501 [2024-11-20 17:21:55.307636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.501 [2024-11-20 17:21:55.307649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.501 qpair failed and we were unable to recover it. 
00:27:37.501 [2024-11-20 17:21:55.317482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.501 [2024-11-20 17:21:55.317537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.501 [2024-11-20 17:21:55.317550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.501 [2024-11-20 17:21:55.317556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.501 [2024-11-20 17:21:55.317562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.501 [2024-11-20 17:21:55.317576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.501 qpair failed and we were unable to recover it. 
00:27:37.501 [2024-11-20 17:21:55.327512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.501 [2024-11-20 17:21:55.327591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.502 [2024-11-20 17:21:55.327605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.502 [2024-11-20 17:21:55.327612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.502 [2024-11-20 17:21:55.327618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.502 [2024-11-20 17:21:55.327631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.502 qpair failed and we were unable to recover it. 
00:27:37.502 [2024-11-20 17:21:55.337549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.502 [2024-11-20 17:21:55.337602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.502 [2024-11-20 17:21:55.337616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.502 [2024-11-20 17:21:55.337622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.502 [2024-11-20 17:21:55.337628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.502 [2024-11-20 17:21:55.337642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.502 qpair failed and we were unable to recover it. 
00:27:37.502 [2024-11-20 17:21:55.347562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.502 [2024-11-20 17:21:55.347614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.502 [2024-11-20 17:21:55.347630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.502 [2024-11-20 17:21:55.347642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.502 [2024-11-20 17:21:55.347648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.502 [2024-11-20 17:21:55.347663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.502 qpair failed and we were unable to recover it. 
00:27:37.502 [2024-11-20 17:21:55.357616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.502 [2024-11-20 17:21:55.357716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.502 [2024-11-20 17:21:55.357730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.502 [2024-11-20 17:21:55.357736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.502 [2024-11-20 17:21:55.357742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.502 [2024-11-20 17:21:55.357756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.502 qpair failed and we were unable to recover it. 
00:27:37.502 [2024-11-20 17:21:55.367661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.502 [2024-11-20 17:21:55.367718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.502 [2024-11-20 17:21:55.367730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.502 [2024-11-20 17:21:55.367737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.502 [2024-11-20 17:21:55.367742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14e4ba0 00:27:37.502 [2024-11-20 17:21:55.367755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.502 qpair failed and we were unable to recover it. 00:27:37.502 [2024-11-20 17:21:55.367895] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:37.502 A controller has encountered a failure and is being reset. 00:27:37.502 Controller properly reset. 00:27:37.502 Initializing NVMe Controllers 00:27:37.502 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:37.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:37.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:37.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:37.502 Initialization complete. Launching workers. 
00:27:37.502 Starting thread on core 1 00:27:37.502 Starting thread on core 2 00:27:37.502 Starting thread on core 3 00:27:37.502 Starting thread on core 0 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:37.502 00:27:37.502 real 0m10.634s 00:27:37.502 user 0m19.367s 00:27:37.502 sys 0m4.716s 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:37.502 ************************************ 00:27:37.502 END TEST nvmf_target_disconnect_tc2 00:27:37.502 ************************************ 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:37.502 rmmod nvme_tcp 00:27:37.502 rmmod nvme_fabrics 00:27:37.502 rmmod nvme_keyring 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2655805 ']' 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2655805 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2655805 ']' 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2655805 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.502 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2655805 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2655805' 00:27:37.762 killing process with pid 2655805 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2655805 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2655805 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.762 17:21:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.297 17:21:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:40.297 00:27:40.297 real 0m19.412s 00:27:40.297 user 0m46.535s 00:27:40.297 sys 0m9.605s 00:27:40.297 17:21:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.297 17:21:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:40.297 ************************************ 00:27:40.297 END TEST nvmf_target_disconnect 00:27:40.297 ************************************ 00:27:40.297 17:21:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:40.297 00:27:40.297 real 5m54.040s 00:27:40.297 user 10m35.830s 00:27:40.297 sys 1m58.899s 00:27:40.297 17:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.297 17:21:57 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.297 ************************************ 00:27:40.297 END TEST nvmf_host 00:27:40.297 ************************************ 00:27:40.297 17:21:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:40.297 17:21:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:40.297 17:21:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:40.297 17:21:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:40.297 17:21:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.297 17:21:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:40.297 ************************************ 00:27:40.297 START TEST nvmf_target_core_interrupt_mode 00:27:40.297 ************************************ 00:27:40.297 17:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:40.297 * Looking for test storage... 
00:27:40.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:40.297 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:40.298 17:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.298 --rc 
genhtml_branch_coverage=1 00:27:40.298 --rc genhtml_function_coverage=1 00:27:40.298 --rc genhtml_legend=1 00:27:40.298 --rc geninfo_all_blocks=1 00:27:40.298 --rc geninfo_unexecuted_blocks=1 00:27:40.298 00:27:40.298 ' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.298 --rc genhtml_branch_coverage=1 00:27:40.298 --rc genhtml_function_coverage=1 00:27:40.298 --rc genhtml_legend=1 00:27:40.298 --rc geninfo_all_blocks=1 00:27:40.298 --rc geninfo_unexecuted_blocks=1 00:27:40.298 00:27:40.298 ' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.298 --rc genhtml_branch_coverage=1 00:27:40.298 --rc genhtml_function_coverage=1 00:27:40.298 --rc genhtml_legend=1 00:27:40.298 --rc geninfo_all_blocks=1 00:27:40.298 --rc geninfo_unexecuted_blocks=1 00:27:40.298 00:27:40.298 ' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:40.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.298 --rc genhtml_branch_coverage=1 00:27:40.298 --rc genhtml_function_coverage=1 00:27:40.298 --rc genhtml_legend=1 00:27:40.298 --rc geninfo_all_blocks=1 00:27:40.298 --rc geninfo_unexecuted_blocks=1 00:27:40.298 00:27:40.298 ' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.298 
17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.298 17:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:40.298 
17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:40.298 ************************************ 00:27:40.298 START TEST nvmf_abort 00:27:40.298 ************************************ 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:40.298 * Looking for test storage... 
00:27:40.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:40.298 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:40.557 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:40.557 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.557 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.557 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.557 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.557 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.557 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.557 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:40.558 17:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.558 --rc genhtml_branch_coverage=1 00:27:40.558 --rc genhtml_function_coverage=1 00:27:40.558 --rc genhtml_legend=1 00:27:40.558 --rc geninfo_all_blocks=1 00:27:40.558 --rc geninfo_unexecuted_blocks=1 00:27:40.558 00:27:40.558 ' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.558 --rc genhtml_branch_coverage=1 00:27:40.558 --rc genhtml_function_coverage=1 00:27:40.558 --rc genhtml_legend=1 00:27:40.558 --rc geninfo_all_blocks=1 00:27:40.558 --rc geninfo_unexecuted_blocks=1 00:27:40.558 00:27:40.558 ' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.558 --rc genhtml_branch_coverage=1 00:27:40.558 --rc genhtml_function_coverage=1 00:27:40.558 --rc genhtml_legend=1 00:27:40.558 --rc geninfo_all_blocks=1 00:27:40.558 --rc geninfo_unexecuted_blocks=1 00:27:40.558 00:27:40.558 ' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.558 --rc genhtml_branch_coverage=1 00:27:40.558 --rc genhtml_function_coverage=1 00:27:40.558 --rc genhtml_legend=1 00:27:40.558 --rc geninfo_all_blocks=1 00:27:40.558 --rc geninfo_unexecuted_blocks=1 00:27:40.558 00:27:40.558 ' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.558 17:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:40.558 17:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:40.558 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:40.559 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.559 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.559 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.559 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:40.559 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:40.559 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:40.559 17:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:47.125 17:22:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:47.125 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:47.125 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.125 
17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:47.125 Found net devices under 0000:86:00.0: cvl_0_0 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:47.125 Found net devices under 0000:86:00.1: cvl_0_1 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.125 17:22:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.125 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:47.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:27:47.126 00:27:47.126 --- 10.0.0.2 ping statistics --- 00:27:47.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.126 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:27:47.126 00:27:47.126 --- 10.0.0.1 ping statistics --- 00:27:47.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.126 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2660393 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2660393 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2660393 ']' 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 [2024-11-20 17:22:04.400254] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:47.126 [2024-11-20 17:22:04.401193] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:27:47.126 [2024-11-20 17:22:04.401237] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.126 [2024-11-20 17:22:04.479994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:47.126 [2024-11-20 17:22:04.521003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.126 [2024-11-20 17:22:04.521037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.126 [2024-11-20 17:22:04.521045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.126 [2024-11-20 17:22:04.521051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.126 [2024-11-20 17:22:04.521056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.126 [2024-11-20 17:22:04.522392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.126 [2024-11-20 17:22:04.522498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.126 [2024-11-20 17:22:04.522499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.126 [2024-11-20 17:22:04.589286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:47.126 [2024-11-20 17:22:04.590179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:47.126 [2024-11-20 17:22:04.590317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:47.126 [2024-11-20 17:22:04.590468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 [2024-11-20 17:22:04.655278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:47.126 Malloc0 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 Delay0 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 [2024-11-20 17:22:04.735180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.126 17:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:47.126 [2024-11-20 17:22:04.862510] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:49.026 Initializing NVMe Controllers 00:27:49.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:49.026 controller IO queue size 128 less than required 00:27:49.026 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:49.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:49.026 Initialization complete. Launching workers. 
00:27:49.026 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37926 00:27:49.026 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37983, failed to submit 66 00:27:49.026 success 37926, unsuccessful 57, failed 0 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.026 rmmod nvme_tcp 00:27:49.026 rmmod nvme_fabrics 00:27:49.026 rmmod nvme_keyring 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.026 17:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2660393 ']' 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2660393 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2660393 ']' 00:27:49.026 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2660393 00:27:49.027 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:49.027 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.027 17:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2660393 00:27:49.027 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:49.027 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:49.027 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2660393' 00:27:49.027 killing process with pid 2660393 00:27:49.027 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2660393 00:27:49.027 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2660393 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.285 17:22:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.285 17:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:51.822 00:27:51.822 real 0m11.076s 00:27:51.822 user 0m10.286s 00:27:51.822 sys 0m5.567s 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:51.822 ************************************ 00:27:51.822 END TEST nvmf_abort 00:27:51.822 ************************************ 00:27:51.822 17:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:51.822 ************************************ 00:27:51.822 START TEST nvmf_ns_hotplug_stress 00:27:51.822 ************************************ 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:51.822 * Looking for test storage... 
00:27:51.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.822 17:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.822 17:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:51.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.822 --rc genhtml_branch_coverage=1 00:27:51.822 --rc genhtml_function_coverage=1 00:27:51.822 --rc genhtml_legend=1 00:27:51.822 --rc geninfo_all_blocks=1 00:27:51.822 --rc geninfo_unexecuted_blocks=1 00:27:51.822 00:27:51.822 ' 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:51.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.822 --rc genhtml_branch_coverage=1 00:27:51.822 --rc genhtml_function_coverage=1 00:27:51.822 --rc genhtml_legend=1 00:27:51.822 --rc geninfo_all_blocks=1 00:27:51.822 --rc geninfo_unexecuted_blocks=1 00:27:51.822 00:27:51.822 ' 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:51.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.822 --rc genhtml_branch_coverage=1 00:27:51.822 --rc genhtml_function_coverage=1 00:27:51.822 --rc genhtml_legend=1 00:27:51.822 --rc geninfo_all_blocks=1 00:27:51.822 --rc geninfo_unexecuted_blocks=1 00:27:51.822 00:27:51.822 ' 00:27:51.822 17:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:51.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.822 --rc genhtml_branch_coverage=1 00:27:51.822 --rc genhtml_function_coverage=1 00:27:51.822 --rc genhtml_legend=1 00:27:51.822 --rc geninfo_all_blocks=1 00:27:51.822 --rc geninfo_unexecuted_blocks=1 00:27:51.822 00:27:51.822 ' 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.822 17:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:51.822 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.823 
17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.823 17:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.393 
17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.393 17:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.393 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:58.394 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.394 17:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:58.394 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.394 
17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:58.394 Found net devices under 0000:86:00.0: cvl_0_0 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:58.394 Found net devices under 0000:86:00.1: cvl_0_1 00:27:58.394 
17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:27:58.394 00:27:58.394 --- 10.0.0.2 ping statistics --- 00:27:58.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.394 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:27:58.394 00:27:58.394 --- 10.0.0.1 ping statistics --- 00:27:58.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.394 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:58.394 17:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2664391 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2664391 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2664391 ']' 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.394 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:58.395 [2024-11-20 17:22:15.547937] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:58.395 [2024-11-20 17:22:15.548826] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:27:58.395 [2024-11-20 17:22:15.548860] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.395 [2024-11-20 17:22:15.623572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.395 [2024-11-20 17:22:15.664866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.395 [2024-11-20 17:22:15.664900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.395 [2024-11-20 17:22:15.664907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.395 [2024-11-20 17:22:15.664913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.395 [2024-11-20 17:22:15.664917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:58.395 [2024-11-20 17:22:15.666248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.395 [2024-11-20 17:22:15.666356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.395 [2024-11-20 17:22:15.666357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.395 [2024-11-20 17:22:15.732787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:58.395 [2024-11-20 17:22:15.733561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:58.395 [2024-11-20 17:22:15.734018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:58.395 [2024-11-20 17:22:15.734108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:58.395 [2024-11-20 17:22:15.959091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.395 17:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:58.395 17:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.395 [2024-11-20 17:22:16.339550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.395 17:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:58.654 17:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:58.912 Malloc0 00:27:58.912 17:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:58.912 Delay0 00:27:58.912 17:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.171 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:59.429 NULL1 00:27:59.429 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:59.687 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2664657 00:27:59.687 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:59.687 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:27:59.687 17:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.058 Read completed with error (sct=0, sc=11) 00:28:01.058 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:28:01.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.058 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:01.058 17:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:01.315 true 00:28:01.315 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:01.315 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.248 17:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.248 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:02.248 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:02.505 true 00:28:02.505 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:02.505 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:02.505 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.763 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:02.763 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:03.021 true 00:28:03.021 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:03.021 17:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.955 17:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.213 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.214 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1004 00:28:04.214 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:04.472 true 00:28:04.472 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:04.472 17:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.406 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.406 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:05.406 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:05.664 true 00:28:05.664 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:05.664 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.922 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.181 17:22:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:06.181 17:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:06.181 true 00:28:06.181 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:06.181 17:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.556 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.556 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:07.556 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:07.813 true 00:28:07.813 17:22:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:07.813 17:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.746 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.005 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:09.005 17:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:09.005 true 00:28:09.005 17:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:09.005 17:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.262 17:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.520 17:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:09.520 17:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:09.520 true 
00:28:09.777 17:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:09.777 17:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.710 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.968 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:10.968 17:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:11.226 true 00:28:11.226 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:11.226 17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.155 
17:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.155 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:12.155 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:12.412 true 00:28:12.412 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:12.412 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.669 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.669 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:12.669 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:12.927 true 00:28:12.927 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:12.927 17:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:28:14.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.298 17:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.298 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:14.298 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:14.556 true 00:28:14.556 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:14.556 17:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.488 17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.488 17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:15.488 
17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:15.746 true 00:28:15.746 17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:15.746 17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.746 17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.004 17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:16.004 17:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:16.262 true 00:28:16.262 17:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:16.262 17:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.636 17:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.636 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:28:17.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.636 17:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:17.636 17:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:17.894 true 00:28:17.894 17:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:17.894 17:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.829 17:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.829 17:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:18.829 17:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:19.087 true 00:28:19.087 17:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:19.087 
17:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.345 17:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.345 17:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:19.345 17:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:19.603 true 00:28:19.603 17:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:19.603 17:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.537 17:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.795 17:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:20.795 17:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:21.053 true 00:28:21.053 17:22:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:21.053 17:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.309 17:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:21.309 17:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:21.309 17:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:21.566 true 00:28:21.566 17:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:21.566 17:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.940 17:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.940 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:28:22.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.940 17:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:22.940 17:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:23.197 true 00:28:23.197 17:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:23.197 17:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.131 17:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.131 17:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:24.131 17:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:24.389 true 00:28:24.389 17:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:24.389 17:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.648 17:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.648 17:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:24.648 17:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:24.906 true 00:28:24.906 17:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:24.906 17:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.099 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.099 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:26.099 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:26.358 true 00:28:26.358 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:26.358 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.617 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.881 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:26.881 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:26.881 true 00:28:26.881 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657 00:28:26.881 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.327 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.327 17:22:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:28:28.328 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:28:28.585 true
00:28:28.585 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657
00:28:28.585 17:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:29.520 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:29.520 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:29.520 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:29.779 true
00:28:29.779 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657
00:28:29.779 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:30.037 Initializing NVMe Controllers
00:28:30.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:30.038 Controller IO queue size 128, less than required.
00:28:30.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:30.038 Controller IO queue size 128, less than required.
00:28:30.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:30.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:30.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:30.038 Initialization complete. Launching workers.
00:28:30.038 ========================================================
00:28:30.038                                                                                                      Latency(us)
00:28:30.038 Device Information                                                    : IOPS       MiB/s    Average        min        max
00:28:30.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2024.54       0.99   43517.37    2558.76 1068301.78
00:28:30.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18322.70       8.95    6986.09    2065.60  301766.36
00:28:30.038 ========================================================
00:28:30.038 Total                                                                 :   20347.24       9.94   10620.94    2065.60 1068301.78
00:28:30.038
00:28:30.038 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:30.038 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:30.038 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:30.296 true
00:28:30.296 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2664657
00:28:30.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill:
(2664657) - No such process 00:28:30.296 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2664657 00:28:30.296 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.554 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:30.812 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:30.812 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:30.812 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:30.812 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.812 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:30.812 null0 00:28:30.812 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:30.812 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:30.813 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:31.071 null1 00:28:31.071 17:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:31.071 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:31.071 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:31.330 null2 00:28:31.330 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:31.330 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:31.330 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:31.330 null3 00:28:31.330 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:31.330 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:31.330 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:31.588 null4 00:28:31.588 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:31.588 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:31.588 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:31.847 null5 00:28:31.847 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:31.847 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:31.847 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:31.847 null6 00:28:31.847 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:31.847 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:31.847 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:32.107 null7 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:32.107 17:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2669994 2669996 2669997 2669999 2670001 2670003 2670005 2670007 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.107 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.366 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.366 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.366 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.366 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.366 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.366 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:28:32.366 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.366 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.625 17:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.625 17:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.625 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.884 17:22:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.884 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.143 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.143 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.143 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.143 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.143 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.143 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.143 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.143 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.402 17:22:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.402 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.661 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.661 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.661 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.661 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.661 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.661 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.661 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.661 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.920 17:22:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.920 17:22:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.920 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.921 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.180 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:34.439 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:34.439 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:34.440 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.440 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:34.440 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:34.440 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:34.440 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.440 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:34.699 17:22:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:34.699 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.959 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.960 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:34.960 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.960 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:34.960 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:34.960 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:34.960 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.219 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.219 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:35.219 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.219 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:35.219 17:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:35.219 17:22:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:35.219 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:35.478 17:22:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.478 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.478 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:35.478 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.478 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.478 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.479 17:22:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.479 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.479 17:22:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:35.739 17:22:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:35.739 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.000 17:22:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:36.000 17:22:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:36.000 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:36.000 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:36.000 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:36.259 17:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.259 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.260 17:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.260 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.260 rmmod nvme_tcp 00:28:36.260 rmmod nvme_fabrics 00:28:36.260 rmmod nvme_keyring 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.519 17:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2664391 ']' 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2664391 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2664391 ']' 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2664391 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664391 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664391' 00:28:36.519 killing process with pid 2664391 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2664391 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2664391 00:28:36.519 
17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.519 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.055 00:28:39.055 real 0m47.261s 00:28:39.055 user 2m57.059s 00:28:39.055 sys 0m19.643s 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:39.055 ************************************ 00:28:39.055 END TEST nvmf_ns_hotplug_stress 00:28:39.055 ************************************ 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:39.055 ************************************ 00:28:39.055 START TEST nvmf_delete_subsystem 00:28:39.055 ************************************ 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:39.055 * Looking for test storage... 
00:28:39.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.055 17:22:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.055 17:22:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:39.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.055 --rc genhtml_branch_coverage=1 00:28:39.055 --rc genhtml_function_coverage=1 00:28:39.055 --rc genhtml_legend=1 00:28:39.055 --rc geninfo_all_blocks=1 00:28:39.055 --rc geninfo_unexecuted_blocks=1 00:28:39.055 00:28:39.055 ' 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:39.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.055 --rc genhtml_branch_coverage=1 00:28:39.055 --rc genhtml_function_coverage=1 00:28:39.055 --rc genhtml_legend=1 00:28:39.055 --rc geninfo_all_blocks=1 00:28:39.055 --rc geninfo_unexecuted_blocks=1 00:28:39.055 00:28:39.055 ' 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:39.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.055 --rc genhtml_branch_coverage=1 00:28:39.055 --rc genhtml_function_coverage=1 00:28:39.055 --rc genhtml_legend=1 00:28:39.055 --rc geninfo_all_blocks=1 00:28:39.055 --rc geninfo_unexecuted_blocks=1 00:28:39.055 00:28:39.055 ' 00:28:39.055 17:22:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:39.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.055 --rc genhtml_branch_coverage=1 00:28:39.055 --rc genhtml_function_coverage=1 00:28:39.055 --rc genhtml_legend=1 00:28:39.055 --rc geninfo_all_blocks=1 00:28:39.055 --rc geninfo_unexecuted_blocks=1 00:28:39.055 00:28:39.055 ' 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.055 17:22:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.055 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.056 
17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.056 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.056 17:22:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:45.628 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:28:45.628 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.628 17:23:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:45.628 Found net devices under 0000:86:00.0: cvl_0_0 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:45.628 Found net devices under 0000:86:00.1: cvl_0_1 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.628 17:23:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.628 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:28:45.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:28:45.629 00:28:45.629 --- 10.0.0.2 ping statistics --- 00:28:45.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.629 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:28:45.629 00:28:45.629 --- 10.0.0.1 ping statistics --- 00:28:45.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.629 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2674346 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2674346 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2674346 ']' 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
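The interface wiring traced above (`nvmf_tcp_init`) moves the target NIC into a private network namespace and leaves the initiator NIC in the root namespace, then verifies connectivity with a ping in each direction. A dry-run sketch of that sequence, using the interface names and addresses from this run; the `run` wrapper is a hypothetical stand-in that only echoes each command, since the real ones need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring from nvmf_tcp_init.
# Names/addresses are taken from the trace; `run` is a hypothetical
# wrapper that echoes instead of executing (the real commands need root).
NS=cvl_0_0_ns_spdk            # NVMF_TARGET_NAMESPACE
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run() { echo "$*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                          # target NIC into the ns
run ip addr add "$INI_IP/24" dev "$INI_IF"                     # initiator side stays in root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
run ping -c 1 "$TGT_IP"                                        # connectivity check
```

With the target app later started through `ip netns exec`, traffic between 10.0.0.1 and 10.0.0.2 crosses the real NIC pair end to end rather than loopback.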
00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.629 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.629 [2024-11-20 17:23:02.837455] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:45.629 [2024-11-20 17:23:02.838404] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:28:45.629 [2024-11-20 17:23:02.838438] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.629 [2024-11-20 17:23:02.919389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:45.629 [2024-11-20 17:23:02.960123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.629 [2024-11-20 17:23:02.960158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.629 [2024-11-20 17:23:02.960165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.629 [2024-11-20 17:23:02.960172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.629 [2024-11-20 17:23:02.960177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.629 [2024-11-20 17:23:02.961384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.629 [2024-11-20 17:23:02.961387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.629 [2024-11-20 17:23:03.030259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
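Earlier in this section, nvmf/common.sh resolves each detected PCI address to its kernel net device names through sysfs (`/sys/bus/pci/devices/$pci/net/*`, then stripping the path prefix). A minimal standalone sketch of that lookup; the `sysroot` parameter is an addition of this sketch (not in the original script) so the function can be exercised against a fake directory tree:

```shell
#!/usr/bin/env bash
# Sketch of the PCI-address -> net-device lookup used by nvmf/common.sh.
# A sysroot parameter is added (not in the original) so the function can
# be run against a fake tree instead of the live /sys.
pci_to_netdevs() {
  local sysroot=$1 pci=$2 d
  for d in "$sysroot/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $d ]] && echo "${d##*/}"   # keep only the device name, as in common.sh@427
  done
}

# On the machine in this log, 0000:86:00.0 maps to cvl_0_0 and
# 0000:86:00.1 to cvl_0_1 (the "Found net devices under ..." lines).
```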
00:28:45.629 [2024-11-20 17:23:03.030840] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:45.629 [2024-11-20 17:23:03.031027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.629 [2024-11-20 17:23:03.098131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.629 [2024-11-20 17:23:03.126486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.629 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.630 NULL1 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.630 Delay0 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2674391 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:45.630 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:45.630 [2024-11-20 17:23:03.237804] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
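The rpc_cmd calls above assemble the target state for this test: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O queued by the perf tool is still outstanding when the subsystem is deleted. A sketch of the same sequence; the `rpc` wrapper is hypothetical and only echoes the command line (the real test issues these through the SPDK RPC socket):

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence driven by delete_subsystem.sh; parameters
# are taken from the trace. `rpc` is a hypothetical echo wrapper --
# the real test sends these to the target over /var/tmp/spdk.sock.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512     # 1000 MB null bdev, 512-byte blocks
# 1,000,000 us (1 s) average and p99 read/write latency on Delay0 keeps
# I/O outstanding long enough for the delete to race against it.
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

spdk_nvme_perf then runs against the listener for 5 seconds while nvmf_delete_subsystem is issued, which is what produces the aborted-I/O completions in the output that follows.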
00:28:47.533 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:47.533 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.533 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:28:47.533 [repeated in-flight I/O abort entries elided: "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6" markers] 
00:28:47.533 [2024-11-20 17:23:05.360482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f680 is same with the state(6) to be set 
00:28:47.534 [2024-11-20 17:23:05.364037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f240c000c40 is same with the state(6) to be set 
00:28:48.471 [2024-11-20 17:23:06.332229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16709a0 is same with the state(6) to be set 
00:28:48.471 [2024-11-20 17:23:06.364294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f4a0 is same with the state(6) to be set 
00:28:48.471 [further Read/Write completion entries elided] Read completed with
error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Write completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Write completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Write completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.471 [2024-11-20 17:23:06.364422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f860 is same with the state(6) to be set 00:28:48.471 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Write completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Write completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Write completed with error (sct=0, sc=8) 
00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 [2024-11-20 17:23:06.364824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f240c00d020 is same with the state(6) to be set 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Write completed with error (sct=0, sc=8) 00:28:48.472 Read completed with error (sct=0, sc=8) 00:28:48.472 Write completed with error (sct=0, sc=8) 00:28:48.472 Write completed with error (sct=0, sc=8) 00:28:48.472 Write completed with error (sct=0, sc=8) 00:28:48.472 [2024-11-20 17:23:06.366902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f240c00d7e0 is same with the state(6) to be set 00:28:48.472 Initializing NVMe Controllers 00:28:48.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.472 Controller IO queue size 128, less than required. 00:28:48.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:48.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:48.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:48.472 Initialization complete. Launching workers. 
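The `(sct=0, sc=8)` pairs in the completions above are NVMe status fields: `sct=0` selects the Generic Command Status table, in which status code 8 is "Command Aborted due to SQ Deletion" per the NVMe Base Specification. That is the expected outcome here, since delete_subsystem.sh tears the subsystem down while spdk_nvme_perf still has I/O in flight. A hedged sketch of a decoder for the handful of generic codes relevant to this log (the function name and the fallback strings are illustrative, not from SPDK):

```shell
# Decode an NVMe (sct, sc) status pair for the generic status type.
# Only a few Generic Command Status values are spelled out; others
# fall through to a numeric placeholder.
decode_nvme_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 0 ]; then
        echo "non-generic status type $sct (sc=$sc)"
        return
    fi
    case "$sc" in
        0) echo "Successful Completion" ;;
        4) echo "Data Transfer Error" ;;
        7) echo "Command Abort Requested" ;;
        8) echo "Command Aborted due to SQ Deletion" ;;
        *) echo "generic status code $sc" ;;
    esac
}
```

For example, `decode_nvme_status 0 8` labels the completions seen throughout this run as aborts caused by the submission queue being deleted underneath the initiator.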
00:28:48.472 ========================================================
00:28:48.472 Latency(us)
00:28:48.472 Device Information : IOPS MiB/s Average min max
00:28:48.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.17 0.09 879467.31 287.18 1006004.78
00:28:48.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.29 0.07 935820.39 229.18 1009736.72
00:28:48.472 ========================================================
00:28:48.472 Total : 329.47 0.16 905515.71 229.18 1009736.72
00:28:48.472
[2024-11-20 17:23:06.367432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16709a0 (9): Bad file descriptor
00:28:48.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:48.472 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.472 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:48.472 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2674391
00:28:48.472 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2674391
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2674391) - No such process
00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2674391
00:28:49.040 17:23:06
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2674391 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2674391 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
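The trace above shows autotest_common.sh's `NOT wait 2674391` idiom: the test asserts that `wait` on the already-reaped perf PID fails (the script sets `es=1` when it does). A minimal sketch of such a negation helper, simplified relative to SPDK's real `NOT`, which also validates the argument with `type -t` before executing it:

```shell
# Succeed only when the wrapped command fails; fail when it succeeds.
# Simplified stand-in for the NOT helper traced above.
NOT() {
    if "$@"; then
        return 1   # wrapped command unexpectedly succeeded
    fi
    return 0       # wrapped command failed, which is what we wanted
}
```

Usage in the same spirit as the log: `NOT wait "$stale_pid"` passes once the process is gone, because `wait` then returns a nonzero status.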
00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:49.040 [2024-11-20 17:23:06.898453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2674952 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2674952 00:28:49.040 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.040 [2024-11-20 17:23:06.983823] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:49.605 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:49.605 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2674952 00:28:49.605 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:50.170 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:50.170 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2674952 00:28:50.170 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:50.428 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:50.428 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2674952 00:28:50.428 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:50.993 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:28:50.993 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2674952 00:28:50.993 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:51.559 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:51.559 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2674952 00:28:51.559 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:52.125 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:52.125 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2674952 00:28:52.125 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:52.125 Initializing NVMe Controllers 00:28:52.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:52.125 Controller IO queue size 128, less than required. 00:28:52.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:52.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:52.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:52.125 Initialization complete. Launching workers. 
00:28:52.125 ========================================================
00:28:52.125 Latency(us)
00:28:52.125 Device Information : IOPS MiB/s Average min max
00:28:52.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002052.22 1000126.00 1006615.65
00:28:52.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004019.20 1000191.24 1040875.97
00:28:52.125 ========================================================
00:28:52.125 Total : 256.00 0.12 1003035.71 1000126.00 1040875.97
00:28:52.125
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2674952
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2674952) - No such process
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2674952
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
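Both teardown sequences in this log use the same polling idiom: probe the perf process with `kill -0` (signal 0 performs an existence check without delivering a signal), sleep 0.5s, and bail out once a retry counter is exhausted. A hedged sketch of that loop as a reusable function; the name `wait_for_exit` and the configurable retry budget are illustrative, not taken from delete_subsystem.sh verbatim:

```shell
# Poll until the given PID exits or the retry budget runs out.
# Mirrors the delay++ / kill -0 / sleep 0.5 loop traced in the log.
wait_for_exit() {
    local pid=$1 max_tries=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do
        # Give up once we have polled max_tries times (~0.5s apart).
        (( delay++ >= max_tries )) && return 1
        sleep 0.5
    done
    return 0   # process is gone
}
```

For example, `spdk_nvme_perf ... & wait_for_exit $!` returns 0 once perf has exited, or 1 if it is still alive after the budget, which is when a script like this would escalate to a hard kill.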
nvmf/common.sh@125 -- # for i in {1..20} 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:52.691 rmmod nvme_tcp 00:28:52.691 rmmod nvme_fabrics 00:28:52.691 rmmod nvme_keyring 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2674346 ']' 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2674346 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2674346 ']' 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2674346 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2674346 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:52.691 17:23:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2674346' 00:28:52.691 killing process with pid 2674346 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2674346 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2674346 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:52.691 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:52.952 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:52.952 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:52.952 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.952 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.952 17:23:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:54.863 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:54.863
00:28:54.863 real 0m16.123s
00:28:54.863 user 0m25.948s
00:28:54.863 sys 0m6.177s
00:28:54.863 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:54.863 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:54.863 ************************************
00:28:54.863 END TEST nvmf_delete_subsystem
00:28:54.863 ************************************
00:28:54.863 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:28:54.863 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:28:54.863 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:54.863 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:54.863 ************************************
00:28:54.863 START TEST nvmf_host_management
00:28:54.863 ************************************
00:28:54.864 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:28:55.125 * Looking for test storage...
00:28:55.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:55.126 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:55.126 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:55.126 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.126 17:23:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:55.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.126 --rc genhtml_branch_coverage=1 00:28:55.126 --rc genhtml_function_coverage=1 00:28:55.126 --rc genhtml_legend=1 00:28:55.126 --rc geninfo_all_blocks=1 00:28:55.126 --rc geninfo_unexecuted_blocks=1 00:28:55.126 00:28:55.126 ' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:55.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.126 --rc genhtml_branch_coverage=1 00:28:55.126 --rc genhtml_function_coverage=1 00:28:55.126 --rc genhtml_legend=1 00:28:55.126 --rc geninfo_all_blocks=1 00:28:55.126 --rc geninfo_unexecuted_blocks=1 00:28:55.126 00:28:55.126 ' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:55.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.126 --rc genhtml_branch_coverage=1 00:28:55.126 --rc genhtml_function_coverage=1 00:28:55.126 --rc genhtml_legend=1 00:28:55.126 --rc geninfo_all_blocks=1 00:28:55.126 --rc geninfo_unexecuted_blocks=1 00:28:55.126 00:28:55.126 ' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:55.126 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.126 --rc genhtml_branch_coverage=1 00:28:55.126 --rc genhtml_function_coverage=1 00:28:55.126 --rc genhtml_legend=1 00:28:55.126 --rc geninfo_all_blocks=1 00:28:55.126 --rc geninfo_unexecuted_blocks=1 00:28:55.126 00:28:55.126 ' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.126 17:23:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.126 
17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.126 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:55.127 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.696 
17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.696 17:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:01.696 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.696 17:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.696 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:01.697 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.697 17:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:01.697 Found net devices under 0000:86:00.0: cvl_0_0 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:01.697 Found net devices under 0000:86:00.1: cvl_0_1 00:29:01.697 17:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:29:01.697 00:29:01.697 --- 10.0.0.2 ping statistics --- 00:29:01.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.697 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:01.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:29:01.697 00:29:01.697 --- 10.0.0.1 ping statistics --- 00:29:01.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.697 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.697 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2679065 00:29:01.698 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2679065 00:29:01.698 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:01.698 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2679065 ']' 00:29:01.698 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.698 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.698 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.698 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.698 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.698 [2024-11-20 17:23:19.015458] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:01.698 [2024-11-20 17:23:19.016339] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:29:01.698 [2024-11-20 17:23:19.016372] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.698 [2024-11-20 17:23:19.079592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:01.698 [2024-11-20 17:23:19.123370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.698 [2024-11-20 17:23:19.123406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.698 [2024-11-20 17:23:19.123414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.698 [2024-11-20 17:23:19.123420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.698 [2024-11-20 17:23:19.123425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:01.698 [2024-11-20 17:23:19.125059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.698 [2024-11-20 17:23:19.125092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.698 [2024-11-20 17:23:19.125209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.698 [2024-11-20 17:23:19.125215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:01.698 [2024-11-20 17:23:19.194090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:01.698 [2024-11-20 17:23:19.194911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:01.698 [2024-11-20 17:23:19.195054] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:01.698 [2024-11-20 17:23:19.195392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:01.698 [2024-11-20 17:23:19.195445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.698 [2024-11-20 17:23:19.269985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.698 17:23:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.698 Malloc0 00:29:01.698 [2024-11-20 17:23:19.362230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2679109 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2679109 /var/tmp/bdevperf.sock 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2679109 ']' 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:01.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.698 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.698 { 00:29:01.698 "params": { 00:29:01.698 "name": "Nvme$subsystem", 00:29:01.698 "trtype": "$TEST_TRANSPORT", 00:29:01.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.698 "adrfam": "ipv4", 00:29:01.698 "trsvcid": "$NVMF_PORT", 00:29:01.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.698 "hdgst": ${hdgst:-false}, 00:29:01.698 "ddgst": ${ddgst:-false} 00:29:01.698 }, 00:29:01.698 "method": "bdev_nvme_attach_controller" 00:29:01.698 } 00:29:01.699 EOF 00:29:01.699 )") 00:29:01.699 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:01.699 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:29:01.699 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:01.699 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:01.699 "params": { 00:29:01.699 "name": "Nvme0", 00:29:01.699 "trtype": "tcp", 00:29:01.699 "traddr": "10.0.0.2", 00:29:01.699 "adrfam": "ipv4", 00:29:01.699 "trsvcid": "4420", 00:29:01.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:01.699 "hdgst": false, 00:29:01.699 "ddgst": false 00:29:01.699 }, 00:29:01.699 "method": "bdev_nvme_attach_controller" 00:29:01.699 }' 00:29:01.699 [2024-11-20 17:23:19.458388] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:29:01.699 [2024-11-20 17:23:19.458432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679109 ] 00:29:01.699 [2024-11-20 17:23:19.516720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.699 [2024-11-20 17:23:19.557698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.958 Running I/O for 10 seconds... 
00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:01.958 17:23:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:29:01.958 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:02.217 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:02.217 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:02.217 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:02.217 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:02.217 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:02.217 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:02.217 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.477 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=670 00:29:02.477 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 670 -ge 100 ']' 00:29:02.477 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:02.477 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:02.478 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:02.478 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:02.478 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.478 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:02.478 [2024-11-20 17:23:20.282036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.478 [2024-11-20 17:23:20.282375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282458] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.478 [2024-11-20 17:23:20.282637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.478 [2024-11-20 17:23:20.282644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 
17:23:20.282789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282871] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.282988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.282994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.283002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.283008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.283016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.479 [2024-11-20 17:23:20.283022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.283030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.479 [2024-11-20 17:23:20.283036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.284000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:02.479 task offset: 101504 on job bdev=Nvme0n1 fails 00:29:02.479 00:29:02.479 Latency(us) 00:29:02.479 [2024-11-20T16:23:20.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.479 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.479 Job: Nvme0n1 ended in about 0.41 seconds with error 00:29:02.479 Verification LBA range: start 0x0 length 0x400 00:29:02.479 Nvme0n1 : 0.41 1892.71 118.29 157.73 0.00 30385.49 1505.77 26838.55 00:29:02.479 [2024-11-20T16:23:20.522Z] =================================================================================================================== 00:29:02.479 [2024-11-20T16:23:20.522Z] Total : 1892.71 118.29 157.73 0.00 30385.49 1505.77 26838.55 00:29:02.479 [2024-11-20 17:23:20.286378] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:02.479 [2024-11-20 17:23:20.286400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d8500 (9): Bad file descriptor 00:29:02.479 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.479 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:02.479 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.479 [2024-11-20 17:23:20.287522] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 
00:29:02.479 [2024-11-20 17:23:20.287595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:02.479 [2024-11-20 17:23:20.287617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.479 [2024-11-20 17:23:20.287634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:29:02.479 [2024-11-20 17:23:20.287641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:29:02.479 [2024-11-20 17:23:20.287648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.479 [2024-11-20 17:23:20.287655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16d8500 00:29:02.479 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:02.479 [2024-11-20 17:23:20.287674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d8500 (9): Bad file descriptor 00:29:02.479 [2024-11-20 17:23:20.287686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:02.479 [2024-11-20 17:23:20.287693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:02.479 [2024-11-20 17:23:20.287701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:02.479 [2024-11-20 17:23:20.287709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:02.479 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.480 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2679109 00:29:03.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2679109) - No such process 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:03.416 { 00:29:03.416 "params": { 00:29:03.416 "name": "Nvme$subsystem", 00:29:03.416 "trtype": "$TEST_TRANSPORT", 00:29:03.416 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:29:03.416 "adrfam": "ipv4", 00:29:03.416 "trsvcid": "$NVMF_PORT", 00:29:03.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.416 "hdgst": ${hdgst:-false}, 00:29:03.416 "ddgst": ${ddgst:-false} 00:29:03.416 }, 00:29:03.416 "method": "bdev_nvme_attach_controller" 00:29:03.416 } 00:29:03.416 EOF 00:29:03.416 )") 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:03.416 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:03.416 "params": { 00:29:03.416 "name": "Nvme0", 00:29:03.416 "trtype": "tcp", 00:29:03.416 "traddr": "10.0.0.2", 00:29:03.416 "adrfam": "ipv4", 00:29:03.416 "trsvcid": "4420", 00:29:03.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.416 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:03.416 "hdgst": false, 00:29:03.416 "ddgst": false 00:29:03.416 }, 00:29:03.416 "method": "bdev_nvme_attach_controller" 00:29:03.416 }' 00:29:03.416 [2024-11-20 17:23:21.352844] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:29:03.416 [2024-11-20 17:23:21.352898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679389 ] 00:29:03.416 [2024-11-20 17:23:21.430035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.675 [2024-11-20 17:23:21.470386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.934 Running I/O for 1 seconds... 
00:29:04.872 2048.00 IOPS, 128.00 MiB/s 00:29:04.872 Latency(us) 00:29:04.872 [2024-11-20T16:23:22.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.872 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:04.872 Verification LBA range: start 0x0 length 0x400 00:29:04.872 Nvme0n1 : 1.02 2075.57 129.72 0.00 0.00 30353.44 6647.22 26588.89 00:29:04.872 [2024-11-20T16:23:22.915Z] =================================================================================================================== 00:29:04.872 [2024-11-20T16:23:22.915Z] Total : 2075.57 129.72 0.00 0.00 30353.44 6647.22 26588.89 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:05.131 
17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.131 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.131 rmmod nvme_tcp 00:29:05.131 rmmod nvme_fabrics 00:29:05.131 rmmod nvme_keyring 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2679065 ']' 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2679065 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2679065 ']' 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2679065 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2679065 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:05.131 17:23:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2679065' 00:29:05.131 killing process with pid 2679065 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2679065 00:29:05.131 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2679065 00:29:05.390 [2024-11-20 17:23:23.255262] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.390 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:07.923 00:29:07.923 real 0m12.485s 00:29:07.923 user 0m18.715s 00:29:07.923 sys 0m6.322s 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:07.923 ************************************ 00:29:07.923 END TEST nvmf_host_management 00:29:07.923 ************************************ 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:07.923 ************************************ 00:29:07.923 START TEST nvmf_lvol 00:29:07.923 ************************************ 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:07.923 * Looking for test storage... 
00:29:07.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:07.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.923 --rc genhtml_branch_coverage=1 00:29:07.923 --rc genhtml_function_coverage=1 00:29:07.923 --rc genhtml_legend=1 00:29:07.923 --rc geninfo_all_blocks=1 00:29:07.923 --rc geninfo_unexecuted_blocks=1 00:29:07.923 00:29:07.923 ' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:07.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.923 --rc genhtml_branch_coverage=1 00:29:07.923 --rc genhtml_function_coverage=1 00:29:07.923 --rc genhtml_legend=1 00:29:07.923 --rc geninfo_all_blocks=1 00:29:07.923 --rc geninfo_unexecuted_blocks=1 00:29:07.923 00:29:07.923 ' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:07.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.923 --rc genhtml_branch_coverage=1 00:29:07.923 --rc genhtml_function_coverage=1 00:29:07.923 --rc genhtml_legend=1 00:29:07.923 --rc geninfo_all_blocks=1 00:29:07.923 --rc geninfo_unexecuted_blocks=1 00:29:07.923 00:29:07.923 ' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:07.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.923 --rc genhtml_branch_coverage=1 00:29:07.923 --rc genhtml_function_coverage=1 00:29:07.923 --rc genhtml_legend=1 00:29:07.923 --rc geninfo_all_blocks=1 00:29:07.923 --rc geninfo_unexecuted_blocks=1 00:29:07.923 00:29:07.923 ' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.923 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.924 
17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.924 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.197 17:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.197 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.457 17:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:13.457 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.457 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:13.458 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.458 17:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:13.458 Found net devices under 0000:86:00.0: cvl_0_0 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.458 17:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:13.458 Found net devices under 0000:86:00.1: cvl_0_1 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:29:13.458 00:29:13.458 --- 10.0.0.2 ping statistics --- 00:29:13.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.458 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:29:13.458 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:29:13.718 00:29:13.718 --- 10.0.0.1 ping statistics --- 00:29:13.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.718 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2683172 
00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2683172 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2683172 ']' 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.718 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:13.718 [2024-11-20 17:23:31.597442] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:13.718 [2024-11-20 17:23:31.598393] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:29:13.718 [2024-11-20 17:23:31.598430] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.718 [2024-11-20 17:23:31.678503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:13.718 [2024-11-20 17:23:31.720128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.718 [2024-11-20 17:23:31.720164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.718 [2024-11-20 17:23:31.720171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.718 [2024-11-20 17:23:31.720177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.718 [2024-11-20 17:23:31.720182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.718 [2024-11-20 17:23:31.721453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.718 [2024-11-20 17:23:31.721561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.718 [2024-11-20 17:23:31.721562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.976 [2024-11-20 17:23:31.788889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:13.976 [2024-11-20 17:23:31.789728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:13.976 [2024-11-20 17:23:31.789802] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:13.976 [2024-11-20 17:23:31.789990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:13.976 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.976 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:13.976 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.976 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.976 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:13.976 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.976 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:14.234 [2024-11-20 17:23:32.034333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.234 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:14.493 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:14.493 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:14.493 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:14.493 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:14.751 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:15.010 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3328a0b5-d872-4b6e-a0af-227c5d782c37 00:29:15.010 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3328a0b5-d872-4b6e-a0af-227c5d782c37 lvol 20 00:29:15.269 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ac80b86d-9f6a-4912-b038-b6245b0783b2 00:29:15.269 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:15.269 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac80b86d-9f6a-4912-b038-b6245b0783b2 00:29:15.529 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:15.789 [2024-11-20 17:23:33.634193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.789 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:16.048 
17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2683616 00:29:16.048 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:16.048 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:17.012 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ac80b86d-9f6a-4912-b038-b6245b0783b2 MY_SNAPSHOT 00:29:17.366 17:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=58da6642-dd4a-49f2-96be-027cbd6b306e 00:29:17.366 17:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ac80b86d-9f6a-4912-b038-b6245b0783b2 30 00:29:17.366 17:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 58da6642-dd4a-49f2-96be-027cbd6b306e MY_CLONE 00:29:17.625 17:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b61d45f3-9b25-4730-88f6-786cefc4f0d4 00:29:17.625 17:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b61d45f3-9b25-4730-88f6-786cefc4f0d4 00:29:18.192 17:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2683616 00:29:26.313 Initializing NVMe Controllers 00:29:26.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:26.313 
Controller IO queue size 128, less than required. 00:29:26.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:26.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:26.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:26.313 Initialization complete. Launching workers. 00:29:26.313 ======================================================== 00:29:26.313 Latency(us) 00:29:26.314 Device Information : IOPS MiB/s Average min max 00:29:26.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12071.40 47.15 10607.52 1570.81 69511.80 00:29:26.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12211.50 47.70 10487.54 187.03 56222.35 00:29:26.314 ======================================================== 00:29:26.314 Total : 24282.90 94.86 10547.18 187.03 69511.80 00:29:26.314 00:29:26.314 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:26.573 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac80b86d-9f6a-4912-b038-b6245b0783b2 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3328a0b5-d872-4b6e-a0af-227c5d782c37 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.833 rmmod nvme_tcp 00:29:26.833 rmmod nvme_fabrics 00:29:26.833 rmmod nvme_keyring 00:29:26.833 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2683172 ']' 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2683172 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2683172 ']' 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2683172 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2683172 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2683172' 00:29:27.093 killing process with pid 2683172 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2683172 00:29:27.093 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2683172 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.353 17:23:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.353 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.262 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.262 00:29:29.262 real 0m21.784s 00:29:29.262 user 0m55.308s 00:29:29.262 sys 0m9.898s 00:29:29.262 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.262 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.262 ************************************ 00:29:29.262 END TEST nvmf_lvol 00:29:29.262 ************************************ 00:29:29.262 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:29.262 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:29.262 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.262 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.262 ************************************ 00:29:29.262 START TEST nvmf_lvs_grow 00:29:29.262 ************************************ 00:29:29.262 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:29.522 * Looking for test storage... 
00:29:29.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.522 17:23:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.522 17:23:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.522 --rc genhtml_branch_coverage=1 00:29:29.522 --rc genhtml_function_coverage=1 00:29:29.522 --rc genhtml_legend=1 00:29:29.522 --rc geninfo_all_blocks=1 00:29:29.522 --rc geninfo_unexecuted_blocks=1 00:29:29.522 00:29:29.522 ' 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.522 --rc genhtml_branch_coverage=1 00:29:29.522 --rc genhtml_function_coverage=1 00:29:29.522 --rc genhtml_legend=1 00:29:29.522 --rc geninfo_all_blocks=1 00:29:29.522 --rc geninfo_unexecuted_blocks=1 00:29:29.522 00:29:29.522 ' 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.522 --rc genhtml_branch_coverage=1 00:29:29.522 --rc genhtml_function_coverage=1 00:29:29.522 --rc genhtml_legend=1 00:29:29.522 --rc geninfo_all_blocks=1 00:29:29.522 --rc geninfo_unexecuted_blocks=1 00:29:29.522 00:29:29.522 ' 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.522 --rc genhtml_branch_coverage=1 00:29:29.522 --rc genhtml_function_coverage=1 00:29:29.522 --rc genhtml_legend=1 00:29:29.522 --rc geninfo_all_blocks=1 00:29:29.522 --rc 
geninfo_unexecuted_blocks=1 00:29:29.522 00:29:29.522 ' 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:29.522 17:23:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.522 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.523 17:23:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.523 17:23:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.523 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:36.102 
17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.102 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.102 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.102 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.103 17:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.103 17:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:36.103 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:36.103 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:36.103 Found net devices under 0000:86:00.0: cvl_0_0 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.103 17:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:36.103 Found net devices under 0000:86:00.1: cvl_0_1 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.103 
17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.103 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:29:36.104 00:29:36.104 --- 10.0.0.2 ping statistics --- 00:29:36.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.104 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:29:36.104 00:29:36.104 --- 10.0.0.1 ping statistics --- 00:29:36.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.104 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:36.104 17:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2688910 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2688910 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2688910 ']' 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:36.104 [2024-11-20 17:23:53.456536] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:36.104 [2024-11-20 17:23:53.457529] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:29:36.104 [2024-11-20 17:23:53.457567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.104 [2024-11-20 17:23:53.535259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.104 [2024-11-20 17:23:53.576768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.104 [2024-11-20 17:23:53.576802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.104 [2024-11-20 17:23:53.576809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.104 [2024-11-20 17:23:53.576815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.104 [2024-11-20 17:23:53.576820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.104 [2024-11-20 17:23:53.577374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.104 [2024-11-20 17:23:53.645393] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:36.104 [2024-11-20 17:23:53.645598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
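The device-discovery trace above ("Found 0000:86:00.0 (0x8086 - 0x159b)" classified as an e810/ice NIC) follows a plain vendor:device PCI-ID table lookup, as the e810/x722/mlx array setup in nvmf/common.sh shows. A minimal sketch of that classification in Python (the function name and table layout are illustrative, not SPDK's actual API; the ID sets are transcribed from the `pci_bus_cache` keys in this log):

```python
# Hypothetical Python port of the NIC-family tables built in nvmf/common.sh.
# Keys are (vendor, device) PCI IDs exactly as they appear in the trace above.
E810 = {("0x8086", "0x1592"), ("0x8086", "0x159b")}
X722 = {("0x8086", "0x37d2")}
MLX = {("0x15b3", dev) for dev in (
    "0xa2dc", "0x1021", "0xa2d6", "0x101d",
    "0x101b", "0x1017", "0x1019", "0x1015", "0x1013",
)}

def classify(vendor: str, device: str) -> str:
    """Map a PCI vendor/device pair to the NIC family the test script expects."""
    key = (vendor, device)
    if key in E810:
        return "e810"
    if key in X722:
        return "x722"
    if key in MLX:
        return "mlx"
    return "unknown"

print(classify("0x8086", "0x159b"))  # the ice NICs found in this run
```

This matches the branch taken in the log: both 0000:86:00.0 and 0000:86:00.1 report device ID 0x159b, so `pci_devs` is populated from the e810 list and the mlx5-specific checks are skipped.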
00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:36.104 [2024-11-20 17:23:53.874011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:36.104 ************************************ 00:29:36.104 START TEST lvs_grow_clean 00:29:36.104 ************************************ 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:36.104 17:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:36.104 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:36.363 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:36.363 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:36.363 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:36.364 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:36.364 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:36.622 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:36.622 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:36.622 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 lvol 150 00:29:36.881 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cafb0668-5c4d-46d5-b5a1-86977be0b7ae 00:29:36.881 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:36.881 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:37.141 [2024-11-20 17:23:54.929753] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:37.141 [2024-11-20 17:23:54.929884] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:37.141 true 00:29:37.141 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:37.141 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:37.141 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:37.141 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:37.400 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cafb0668-5c4d-46d5-b5a1-86977be0b7ae 00:29:37.659 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.659 [2024-11-20 17:23:55.662272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.659 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2689259 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2689259 /var/tmp/bdevperf.sock 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2689259 ']' 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:37.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.918 17:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:37.918 [2024-11-20 17:23:55.934651] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:29:37.918 [2024-11-20 17:23:55.934696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689259 ] 00:29:38.177 [2024-11-20 17:23:56.010948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.177 [2024-11-20 17:23:56.053084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.177 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.177 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:38.177 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:38.436 Nvme0n1 00:29:38.436 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:38.695 [ 00:29:38.695 { 00:29:38.695 "name": "Nvme0n1", 00:29:38.695 "aliases": [ 00:29:38.695 "cafb0668-5c4d-46d5-b5a1-86977be0b7ae" 00:29:38.695 ], 00:29:38.695 "product_name": "NVMe disk", 00:29:38.695 
"block_size": 4096, 00:29:38.695 "num_blocks": 38912, 00:29:38.695 "uuid": "cafb0668-5c4d-46d5-b5a1-86977be0b7ae", 00:29:38.695 "numa_id": 1, 00:29:38.695 "assigned_rate_limits": { 00:29:38.695 "rw_ios_per_sec": 0, 00:29:38.695 "rw_mbytes_per_sec": 0, 00:29:38.695 "r_mbytes_per_sec": 0, 00:29:38.695 "w_mbytes_per_sec": 0 00:29:38.695 }, 00:29:38.695 "claimed": false, 00:29:38.695 "zoned": false, 00:29:38.695 "supported_io_types": { 00:29:38.695 "read": true, 00:29:38.695 "write": true, 00:29:38.695 "unmap": true, 00:29:38.695 "flush": true, 00:29:38.695 "reset": true, 00:29:38.695 "nvme_admin": true, 00:29:38.695 "nvme_io": true, 00:29:38.695 "nvme_io_md": false, 00:29:38.695 "write_zeroes": true, 00:29:38.695 "zcopy": false, 00:29:38.695 "get_zone_info": false, 00:29:38.695 "zone_management": false, 00:29:38.695 "zone_append": false, 00:29:38.695 "compare": true, 00:29:38.695 "compare_and_write": true, 00:29:38.695 "abort": true, 00:29:38.695 "seek_hole": false, 00:29:38.695 "seek_data": false, 00:29:38.695 "copy": true, 00:29:38.696 "nvme_iov_md": false 00:29:38.696 }, 00:29:38.696 "memory_domains": [ 00:29:38.696 { 00:29:38.696 "dma_device_id": "system", 00:29:38.696 "dma_device_type": 1 00:29:38.696 } 00:29:38.696 ], 00:29:38.696 "driver_specific": { 00:29:38.696 "nvme": [ 00:29:38.696 { 00:29:38.696 "trid": { 00:29:38.696 "trtype": "TCP", 00:29:38.696 "adrfam": "IPv4", 00:29:38.696 "traddr": "10.0.0.2", 00:29:38.696 "trsvcid": "4420", 00:29:38.696 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:38.696 }, 00:29:38.696 "ctrlr_data": { 00:29:38.696 "cntlid": 1, 00:29:38.696 "vendor_id": "0x8086", 00:29:38.696 "model_number": "SPDK bdev Controller", 00:29:38.696 "serial_number": "SPDK0", 00:29:38.696 "firmware_revision": "25.01", 00:29:38.696 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.696 "oacs": { 00:29:38.696 "security": 0, 00:29:38.696 "format": 0, 00:29:38.696 "firmware": 0, 00:29:38.696 "ns_manage": 0 00:29:38.696 }, 00:29:38.696 "multi_ctrlr": true, 
00:29:38.696 "ana_reporting": false 00:29:38.696 }, 00:29:38.696 "vs": { 00:29:38.696 "nvme_version": "1.3" 00:29:38.696 }, 00:29:38.696 "ns_data": { 00:29:38.696 "id": 1, 00:29:38.696 "can_share": true 00:29:38.696 } 00:29:38.696 } 00:29:38.696 ], 00:29:38.696 "mp_policy": "active_passive" 00:29:38.696 } 00:29:38.696 } 00:29:38.696 ] 00:29:38.696 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2689484 00:29:38.696 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:38.696 17:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:38.696 Running I/O for 10 seconds... 00:29:39.632 Latency(us) 00:29:39.632 [2024-11-20T16:23:57.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.632 Nvme0n1 : 1.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:39.632 [2024-11-20T16:23:57.675Z] =================================================================================================================== 00:29:39.632 [2024-11-20T16:23:57.675Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:39.632 00:29:40.569 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:40.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.827 Nvme0n1 : 2.00 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:29:40.827 [2024-11-20T16:23:58.870Z] 
=================================================================================================================== 00:29:40.827 [2024-11-20T16:23:58.870Z] Total : 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:29:40.827 00:29:40.827 true 00:29:40.827 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:40.827 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:41.085 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:41.085 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:41.085 17:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2689484 00:29:41.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.653 Nvme0n1 : 3.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:41.653 [2024-11-20T16:23:59.696Z] =================================================================================================================== 00:29:41.653 [2024-11-20T16:23:59.696Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:41.653 00:29:43.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.030 Nvme0n1 : 4.00 22891.75 89.42 0.00 0.00 0.00 0.00 0.00 00:29:43.030 [2024-11-20T16:24:01.073Z] =================================================================================================================== 00:29:43.030 [2024-11-20T16:24:01.073Z] Total : 22891.75 89.42 0.00 0.00 0.00 0.00 0.00 00:29:43.030 00:29:43.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:43.966 Nvme0n1 : 5.00 22910.80 89.50 0.00 0.00 0.00 0.00 0.00 00:29:43.966 [2024-11-20T16:24:02.009Z] =================================================================================================================== 00:29:43.966 [2024-11-20T16:24:02.009Z] Total : 22910.80 89.50 0.00 0.00 0.00 0.00 0.00 00:29:43.966 00:29:44.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.903 Nvme0n1 : 6.00 22944.67 89.63 0.00 0.00 0.00 0.00 0.00 00:29:44.903 [2024-11-20T16:24:02.946Z] =================================================================================================================== 00:29:44.903 [2024-11-20T16:24:02.946Z] Total : 22944.67 89.63 0.00 0.00 0.00 0.00 0.00 00:29:44.903 00:29:45.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.839 Nvme0n1 : 7.00 22955.57 89.67 0.00 0.00 0.00 0.00 0.00 00:29:45.839 [2024-11-20T16:24:03.882Z] =================================================================================================================== 00:29:45.839 [2024-11-20T16:24:03.882Z] Total : 22955.57 89.67 0.00 0.00 0.00 0.00 0.00 00:29:45.839 00:29:46.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.776 Nvme0n1 : 8.00 22963.75 89.70 0.00 0.00 0.00 0.00 0.00 00:29:46.776 [2024-11-20T16:24:04.819Z] =================================================================================================================== 00:29:46.776 [2024-11-20T16:24:04.819Z] Total : 22963.75 89.70 0.00 0.00 0.00 0.00 0.00 00:29:46.776 00:29:47.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.713 Nvme0n1 : 9.00 22970.11 89.73 0.00 0.00 0.00 0.00 0.00 00:29:47.713 [2024-11-20T16:24:05.756Z] =================================================================================================================== 00:29:47.713 [2024-11-20T16:24:05.756Z] Total : 22970.11 89.73 0.00 0.00 0.00 0.00 0.00 00:29:47.713 
00:29:48.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.651 Nvme0n1 : 10.00 22997.20 89.83 0.00 0.00 0.00 0.00 0.00 00:29:48.651 [2024-11-20T16:24:06.694Z] =================================================================================================================== 00:29:48.651 [2024-11-20T16:24:06.694Z] Total : 22997.20 89.83 0.00 0.00 0.00 0.00 0.00 00:29:48.651 00:29:48.651 00:29:48.651 Latency(us) 00:29:48.651 [2024-11-20T16:24:06.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.651 Nvme0n1 : 10.00 22989.65 89.80 0.00 0.00 5564.08 3276.80 25590.25 00:29:48.651 [2024-11-20T16:24:06.694Z] =================================================================================================================== 00:29:48.651 [2024-11-20T16:24:06.694Z] Total : 22989.65 89.80 0.00 0.00 5564.08 3276.80 25590.25 00:29:48.651 { 00:29:48.651 "results": [ 00:29:48.651 { 00:29:48.651 "job": "Nvme0n1", 00:29:48.651 "core_mask": "0x2", 00:29:48.651 "workload": "randwrite", 00:29:48.651 "status": "finished", 00:29:48.651 "queue_depth": 128, 00:29:48.651 "io_size": 4096, 00:29:48.651 "runtime": 10.003327, 00:29:48.651 "iops": 22989.651342998186, 00:29:48.651 "mibps": 89.80332555858666, 00:29:48.651 "io_failed": 0, 00:29:48.652 "io_timeout": 0, 00:29:48.652 "avg_latency_us": 5564.0784003588005, 00:29:48.652 "min_latency_us": 3276.8, 00:29:48.652 "max_latency_us": 25590.24761904762 00:29:48.652 } 00:29:48.652 ], 00:29:48.652 "core_count": 1 00:29:48.652 } 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2689259 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2689259 ']' 00:29:48.911 17:24:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2689259 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2689259 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2689259' 00:29:48.911 killing process with pid 2689259 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2689259 00:29:48.911 Received shutdown signal, test time was about 10.000000 seconds 00:29:48.911 00:29:48.911 Latency(us) 00:29:48.911 [2024-11-20T16:24:06.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.911 [2024-11-20T16:24:06.954Z] =================================================================================================================== 00:29:48.911 [2024-11-20T16:24:06.954Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2689259 00:29:48.911 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.170 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:49.430 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:49.430 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:49.689 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:49.689 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:49.689 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:49.689 [2024-11-20 17:24:07.681799] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:49.949 request: 00:29:49.949 { 00:29:49.949 "uuid": "552fe5e8-94b1-43e9-8ac7-2f046f19b1c3", 00:29:49.949 "method": 
"bdev_lvol_get_lvstores", 00:29:49.949 "req_id": 1 00:29:49.949 } 00:29:49.949 Got JSON-RPC error response 00:29:49.949 response: 00:29:49.949 { 00:29:49.949 "code": -19, 00:29:49.949 "message": "No such device" 00:29:49.949 } 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:49.949 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:50.208 aio_bdev 00:29:50.208 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cafb0668-5c4d-46d5-b5a1-86977be0b7ae 00:29:50.208 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=cafb0668-5c4d-46d5-b5a1-86977be0b7ae 00:29:50.208 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:50.208 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:50.208 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:50.208 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:50.208 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:50.467 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cafb0668-5c4d-46d5-b5a1-86977be0b7ae -t 2000 00:29:50.467 [ 00:29:50.467 { 00:29:50.467 "name": "cafb0668-5c4d-46d5-b5a1-86977be0b7ae", 00:29:50.467 "aliases": [ 00:29:50.467 "lvs/lvol" 00:29:50.467 ], 00:29:50.468 "product_name": "Logical Volume", 00:29:50.468 "block_size": 4096, 00:29:50.468 "num_blocks": 38912, 00:29:50.468 "uuid": "cafb0668-5c4d-46d5-b5a1-86977be0b7ae", 00:29:50.468 "assigned_rate_limits": { 00:29:50.468 "rw_ios_per_sec": 0, 00:29:50.468 "rw_mbytes_per_sec": 0, 00:29:50.468 "r_mbytes_per_sec": 0, 00:29:50.468 "w_mbytes_per_sec": 0 00:29:50.468 }, 00:29:50.468 "claimed": false, 00:29:50.468 "zoned": false, 00:29:50.468 "supported_io_types": { 00:29:50.468 "read": true, 00:29:50.468 "write": true, 00:29:50.468 "unmap": true, 00:29:50.468 "flush": false, 00:29:50.468 "reset": true, 00:29:50.468 "nvme_admin": false, 00:29:50.468 "nvme_io": false, 00:29:50.468 "nvme_io_md": false, 00:29:50.468 "write_zeroes": true, 00:29:50.468 "zcopy": false, 00:29:50.468 "get_zone_info": false, 00:29:50.468 "zone_management": false, 00:29:50.468 "zone_append": false, 00:29:50.468 "compare": false, 00:29:50.468 "compare_and_write": false, 00:29:50.468 "abort": false, 00:29:50.468 "seek_hole": true, 00:29:50.468 "seek_data": true, 00:29:50.468 "copy": false, 00:29:50.468 "nvme_iov_md": false 00:29:50.468 }, 00:29:50.468 "driver_specific": { 00:29:50.468 "lvol": { 00:29:50.468 "lvol_store_uuid": "552fe5e8-94b1-43e9-8ac7-2f046f19b1c3", 00:29:50.468 "base_bdev": "aio_bdev", 00:29:50.468 
"thin_provision": false, 00:29:50.468 "num_allocated_clusters": 38, 00:29:50.468 "snapshot": false, 00:29:50.468 "clone": false, 00:29:50.468 "esnap_clone": false 00:29:50.468 } 00:29:50.468 } 00:29:50.468 } 00:29:50.468 ] 00:29:50.468 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:50.468 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:50.468 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:50.727 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:50.727 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 00:29:50.727 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:50.986 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:50.986 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cafb0668-5c4d-46d5-b5a1-86977be0b7ae 00:29:51.245 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 552fe5e8-94b1-43e9-8ac7-2f046f19b1c3 
00:29:51.502 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:51.502 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:51.502 00:29:51.502 real 0m15.585s 00:29:51.502 user 0m15.055s 00:29:51.502 sys 0m1.466s 00:29:51.503 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.503 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.503 ************************************ 00:29:51.503 END TEST lvs_grow_clean 00:29:51.503 ************************************ 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:51.761 ************************************ 00:29:51.761 START TEST lvs_grow_dirty 00:29:51.761 ************************************ 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:51.761 17:24:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:51.761 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:52.020 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:52.020 17:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:52.020 17:24:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:29:52.020 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:29:52.020 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:52.279 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:52.279 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:52.279 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u da5e8ca8-d8a2-4681-a362-b25ed09f699c lvol 150 00:29:52.537 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cba18497-1d1f-47eb-9a9e-da346d42cd5d 00:29:52.538 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:52.538 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:52.797 [2024-11-20 17:24:10.585761] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:52.797 [2024-11-20 
17:24:10.585887] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:52.797 true 00:29:52.797 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:29:52.797 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:52.797 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:52.797 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:53.056 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cba18497-1d1f-47eb-9a9e-da346d42cd5d 00:29:53.315 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:53.574 [2024-11-20 17:24:11.370181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2692350 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2692350 /var/tmp/bdevperf.sock 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2692350 ']' 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:53.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.574 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:53.833 [2024-11-20 17:24:11.629127] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:29:53.833 [2024-11-20 17:24:11.629177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692350 ] 00:29:53.833 [2024-11-20 17:24:11.703283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.833 [2024-11-20 17:24:11.744863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.833 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.833 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:53.833 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:54.399 Nvme0n1 00:29:54.399 17:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:54.399 [ 00:29:54.399 { 00:29:54.399 "name": "Nvme0n1", 00:29:54.399 "aliases": [ 00:29:54.399 "cba18497-1d1f-47eb-9a9e-da346d42cd5d" 00:29:54.399 ], 00:29:54.399 "product_name": "NVMe disk", 00:29:54.399 "block_size": 4096, 00:29:54.399 "num_blocks": 38912, 00:29:54.399 "uuid": "cba18497-1d1f-47eb-9a9e-da346d42cd5d", 00:29:54.399 "numa_id": 1, 00:29:54.399 "assigned_rate_limits": { 00:29:54.399 "rw_ios_per_sec": 0, 00:29:54.399 "rw_mbytes_per_sec": 0, 00:29:54.399 "r_mbytes_per_sec": 0, 00:29:54.399 "w_mbytes_per_sec": 0 00:29:54.399 }, 00:29:54.399 "claimed": false, 00:29:54.399 "zoned": false, 
00:29:54.399 "supported_io_types": { 00:29:54.399 "read": true, 00:29:54.399 "write": true, 00:29:54.399 "unmap": true, 00:29:54.399 "flush": true, 00:29:54.399 "reset": true, 00:29:54.399 "nvme_admin": true, 00:29:54.399 "nvme_io": true, 00:29:54.399 "nvme_io_md": false, 00:29:54.399 "write_zeroes": true, 00:29:54.399 "zcopy": false, 00:29:54.399 "get_zone_info": false, 00:29:54.399 "zone_management": false, 00:29:54.399 "zone_append": false, 00:29:54.400 "compare": true, 00:29:54.400 "compare_and_write": true, 00:29:54.400 "abort": true, 00:29:54.400 "seek_hole": false, 00:29:54.400 "seek_data": false, 00:29:54.400 "copy": true, 00:29:54.400 "nvme_iov_md": false 00:29:54.400 }, 00:29:54.400 "memory_domains": [ 00:29:54.400 { 00:29:54.400 "dma_device_id": "system", 00:29:54.400 "dma_device_type": 1 00:29:54.400 } 00:29:54.400 ], 00:29:54.400 "driver_specific": { 00:29:54.400 "nvme": [ 00:29:54.400 { 00:29:54.400 "trid": { 00:29:54.400 "trtype": "TCP", 00:29:54.400 "adrfam": "IPv4", 00:29:54.400 "traddr": "10.0.0.2", 00:29:54.400 "trsvcid": "4420", 00:29:54.400 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:54.400 }, 00:29:54.400 "ctrlr_data": { 00:29:54.400 "cntlid": 1, 00:29:54.400 "vendor_id": "0x8086", 00:29:54.400 "model_number": "SPDK bdev Controller", 00:29:54.400 "serial_number": "SPDK0", 00:29:54.400 "firmware_revision": "25.01", 00:29:54.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:54.400 "oacs": { 00:29:54.400 "security": 0, 00:29:54.400 "format": 0, 00:29:54.400 "firmware": 0, 00:29:54.400 "ns_manage": 0 00:29:54.400 }, 00:29:54.400 "multi_ctrlr": true, 00:29:54.400 "ana_reporting": false 00:29:54.400 }, 00:29:54.400 "vs": { 00:29:54.400 "nvme_version": "1.3" 00:29:54.400 }, 00:29:54.400 "ns_data": { 00:29:54.400 "id": 1, 00:29:54.400 "can_share": true 00:29:54.400 } 00:29:54.400 } 00:29:54.400 ], 00:29:54.400 "mp_policy": "active_passive" 00:29:54.400 } 00:29:54.400 } 00:29:54.400 ] 00:29:54.400 17:24:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2692575 00:29:54.400 17:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:54.400 17:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:54.659 Running I/O for 10 seconds... 00:29:55.597 Latency(us) 00:29:55.597 [2024-11-20T16:24:13.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.597 Nvme0n1 : 1.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:55.597 [2024-11-20T16:24:13.640Z] =================================================================================================================== 00:29:55.597 [2024-11-20T16:24:13.640Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:55.597 00:29:56.532 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:29:56.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.532 Nvme0n1 : 2.00 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:29:56.532 [2024-11-20T16:24:14.575Z] =================================================================================================================== 00:29:56.532 [2024-11-20T16:24:14.575Z] Total : 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:29:56.532 00:29:56.790 true 00:29:56.790 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:29:56.790 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:56.790 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:56.790 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:56.790 17:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2692575 00:29:57.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.727 Nvme0n1 : 3.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:57.727 [2024-11-20T16:24:15.770Z] =================================================================================================================== 00:29:57.727 [2024-11-20T16:24:15.770Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:29:57.727 00:29:58.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.662 Nvme0n1 : 4.00 23082.25 90.17 0.00 0.00 0.00 0.00 0.00 00:29:58.662 [2024-11-20T16:24:16.705Z] =================================================================================================================== 00:29:58.662 [2024-11-20T16:24:16.705Z] Total : 23082.25 90.17 0.00 0.00 0.00 0.00 0.00 00:29:58.662 00:29:59.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.599 Nvme0n1 : 5.00 23164.80 90.49 0.00 0.00 0.00 0.00 0.00 00:29:59.599 [2024-11-20T16:24:17.642Z] =================================================================================================================== 00:29:59.599 [2024-11-20T16:24:17.642Z] Total : 23164.80 90.49 0.00 0.00 0.00 0.00 0.00 00:29:59.599 00:30:00.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:00.535 Nvme0n1 : 6.00 23219.83 90.70 0.00 0.00 0.00 0.00 0.00 00:30:00.535 [2024-11-20T16:24:18.578Z] =================================================================================================================== 00:30:00.535 [2024-11-20T16:24:18.578Z] Total : 23219.83 90.70 0.00 0.00 0.00 0.00 0.00 00:30:00.535 00:30:01.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.913 Nvme0n1 : 7.00 23250.14 90.82 0.00 0.00 0.00 0.00 0.00 00:30:01.913 [2024-11-20T16:24:19.956Z] =================================================================================================================== 00:30:01.913 [2024-11-20T16:24:19.956Z] Total : 23250.14 90.82 0.00 0.00 0.00 0.00 0.00 00:30:01.913 00:30:02.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.847 Nvme0n1 : 8.00 23249.00 90.82 0.00 0.00 0.00 0.00 0.00 00:30:02.847 [2024-11-20T16:24:20.890Z] =================================================================================================================== 00:30:02.847 [2024-11-20T16:24:20.890Z] Total : 23249.00 90.82 0.00 0.00 0.00 0.00 0.00 00:30:02.847 00:30:03.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:03.782 Nvme0n1 : 9.00 23262.22 90.87 0.00 0.00 0.00 0.00 0.00 00:30:03.782 [2024-11-20T16:24:21.825Z] =================================================================================================================== 00:30:03.782 [2024-11-20T16:24:21.825Z] Total : 23262.22 90.87 0.00 0.00 0.00 0.00 0.00 00:30:03.782 00:30:04.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.719 Nvme0n1 : 10.00 23285.50 90.96 0.00 0.00 0.00 0.00 0.00 00:30:04.719 [2024-11-20T16:24:22.762Z] =================================================================================================================== 00:30:04.719 [2024-11-20T16:24:22.762Z] Total : 23285.50 90.96 0.00 0.00 0.00 0.00 0.00 00:30:04.719 00:30:04.719 
00:30:04.719 Latency(us) 00:30:04.719 [2024-11-20T16:24:22.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.719 Nvme0n1 : 10.00 23288.62 90.97 0.00 0.00 5493.36 3432.84 27462.70 00:30:04.719 [2024-11-20T16:24:22.762Z] =================================================================================================================== 00:30:04.719 [2024-11-20T16:24:22.762Z] Total : 23288.62 90.97 0.00 0.00 5493.36 3432.84 27462.70 00:30:04.719 { 00:30:04.719 "results": [ 00:30:04.719 { 00:30:04.719 "job": "Nvme0n1", 00:30:04.719 "core_mask": "0x2", 00:30:04.719 "workload": "randwrite", 00:30:04.719 "status": "finished", 00:30:04.719 "queue_depth": 128, 00:30:04.719 "io_size": 4096, 00:30:04.719 "runtime": 10.004156, 00:30:04.719 "iops": 23288.621249008913, 00:30:04.719 "mibps": 90.97117675394107, 00:30:04.719 "io_failed": 0, 00:30:04.719 "io_timeout": 0, 00:30:04.719 "avg_latency_us": 5493.360011233193, 00:30:04.719 "min_latency_us": 3432.8380952380953, 00:30:04.719 "max_latency_us": 27462.704761904763 00:30:04.719 } 00:30:04.719 ], 00:30:04.719 "core_count": 1 00:30:04.719 } 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2692350 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2692350 ']' 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2692350 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.719 17:24:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2692350 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2692350' 00:30:04.719 killing process with pid 2692350 00:30:04.719 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2692350 00:30:04.719 Received shutdown signal, test time was about 10.000000 seconds 00:30:04.719 00:30:04.719 Latency(us) 00:30:04.719 [2024-11-20T16:24:22.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.719 [2024-11-20T16:24:22.762Z] =================================================================================================================== 00:30:04.720 [2024-11-20T16:24:22.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.720 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2692350 00:30:04.977 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:04.977 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:05.236 17:24:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:05.236 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2688910 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2688910 00:30:05.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2688910 Killed "${NVMF_APP[@]}" "$@" 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2694237 00:30:05.495 17:24:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2694237 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2694237 ']' 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.495 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:05.495 [2024-11-20 17:24:23.451434] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:05.495 [2024-11-20 17:24:23.452310] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:30:05.495 [2024-11-20 17:24:23.452343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.495 [2024-11-20 17:24:23.531417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.755 [2024-11-20 17:24:23.571512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.755 [2024-11-20 17:24:23.571547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.755 [2024-11-20 17:24:23.571553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.755 [2024-11-20 17:24:23.571559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.755 [2024-11-20 17:24:23.571564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.755 [2024-11-20 17:24:23.572107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.755 [2024-11-20 17:24:23.639320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:05.755 [2024-11-20 17:24:23.639546] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:05.755 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.755 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:05.755 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.755 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.755 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:05.755 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.755 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:06.014 [2024-11-20 17:24:23.885545] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:06.014 [2024-11-20 17:24:23.885750] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:06.014 [2024-11-20 17:24:23.885834] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:06.014 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:06.014 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cba18497-1d1f-47eb-9a9e-da346d42cd5d 00:30:06.014 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=cba18497-1d1f-47eb-9a9e-da346d42cd5d 00:30:06.014 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:06.014 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:06.014 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:06.014 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:06.014 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:06.274 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cba18497-1d1f-47eb-9a9e-da346d42cd5d -t 2000 00:30:06.274 [ 00:30:06.274 { 00:30:06.274 "name": "cba18497-1d1f-47eb-9a9e-da346d42cd5d", 00:30:06.274 "aliases": [ 00:30:06.274 "lvs/lvol" 00:30:06.274 ], 00:30:06.274 "product_name": "Logical Volume", 00:30:06.274 "block_size": 4096, 00:30:06.274 "num_blocks": 38912, 00:30:06.274 "uuid": "cba18497-1d1f-47eb-9a9e-da346d42cd5d", 00:30:06.274 "assigned_rate_limits": { 00:30:06.274 "rw_ios_per_sec": 0, 00:30:06.274 "rw_mbytes_per_sec": 0, 00:30:06.274 "r_mbytes_per_sec": 0, 00:30:06.274 "w_mbytes_per_sec": 0 00:30:06.274 }, 00:30:06.274 "claimed": false, 00:30:06.274 "zoned": false, 00:30:06.274 "supported_io_types": { 00:30:06.274 "read": true, 00:30:06.274 "write": true, 00:30:06.274 "unmap": true, 00:30:06.274 "flush": false, 00:30:06.274 "reset": true, 00:30:06.274 "nvme_admin": false, 00:30:06.274 "nvme_io": false, 00:30:06.274 "nvme_io_md": false, 00:30:06.274 "write_zeroes": true, 
00:30:06.274 "zcopy": false, 00:30:06.274 "get_zone_info": false, 00:30:06.274 "zone_management": false, 00:30:06.274 "zone_append": false, 00:30:06.274 "compare": false, 00:30:06.274 "compare_and_write": false, 00:30:06.274 "abort": false, 00:30:06.274 "seek_hole": true, 00:30:06.274 "seek_data": true, 00:30:06.274 "copy": false, 00:30:06.274 "nvme_iov_md": false 00:30:06.274 }, 00:30:06.274 "driver_specific": { 00:30:06.274 "lvol": { 00:30:06.274 "lvol_store_uuid": "da5e8ca8-d8a2-4681-a362-b25ed09f699c", 00:30:06.274 "base_bdev": "aio_bdev", 00:30:06.274 "thin_provision": false, 00:30:06.274 "num_allocated_clusters": 38, 00:30:06.274 "snapshot": false, 00:30:06.274 "clone": false, 00:30:06.274 "esnap_clone": false 00:30:06.274 } 00:30:06.274 } 00:30:06.274 } 00:30:06.274 ] 00:30:06.274 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:06.533 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:06.533 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:06.533 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:06.533 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:06.533 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:06.791 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:06.791 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:07.050 [2024-11-20 17:24:24.860582] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:07.050 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:07.050 request: 00:30:07.050 { 00:30:07.050 "uuid": "da5e8ca8-d8a2-4681-a362-b25ed09f699c", 00:30:07.050 "method": "bdev_lvol_get_lvstores", 00:30:07.050 "req_id": 1 00:30:07.050 } 00:30:07.050 Got JSON-RPC error response 00:30:07.050 response: 00:30:07.050 { 00:30:07.050 "code": -19, 00:30:07.050 "message": "No such device" 00:30:07.050 } 00:30:07.307 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:07.307 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:07.307 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:07.308 aio_bdev 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cba18497-1d1f-47eb-9a9e-da346d42cd5d 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=cba18497-1d1f-47eb-9a9e-da346d42cd5d 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:07.308 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:07.583 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cba18497-1d1f-47eb-9a9e-da346d42cd5d -t 2000 00:30:07.862 [ 00:30:07.862 { 00:30:07.862 "name": "cba18497-1d1f-47eb-9a9e-da346d42cd5d", 00:30:07.862 "aliases": [ 00:30:07.862 "lvs/lvol" 00:30:07.862 ], 00:30:07.862 "product_name": "Logical Volume", 00:30:07.862 "block_size": 4096, 00:30:07.862 "num_blocks": 38912, 00:30:07.862 "uuid": "cba18497-1d1f-47eb-9a9e-da346d42cd5d", 00:30:07.862 "assigned_rate_limits": { 00:30:07.862 "rw_ios_per_sec": 0, 00:30:07.862 "rw_mbytes_per_sec": 0, 00:30:07.862 
"r_mbytes_per_sec": 0, 00:30:07.862 "w_mbytes_per_sec": 0 00:30:07.862 }, 00:30:07.862 "claimed": false, 00:30:07.862 "zoned": false, 00:30:07.862 "supported_io_types": { 00:30:07.862 "read": true, 00:30:07.862 "write": true, 00:30:07.862 "unmap": true, 00:30:07.862 "flush": false, 00:30:07.862 "reset": true, 00:30:07.862 "nvme_admin": false, 00:30:07.862 "nvme_io": false, 00:30:07.862 "nvme_io_md": false, 00:30:07.862 "write_zeroes": true, 00:30:07.862 "zcopy": false, 00:30:07.862 "get_zone_info": false, 00:30:07.862 "zone_management": false, 00:30:07.862 "zone_append": false, 00:30:07.862 "compare": false, 00:30:07.862 "compare_and_write": false, 00:30:07.862 "abort": false, 00:30:07.862 "seek_hole": true, 00:30:07.862 "seek_data": true, 00:30:07.862 "copy": false, 00:30:07.862 "nvme_iov_md": false 00:30:07.862 }, 00:30:07.862 "driver_specific": { 00:30:07.862 "lvol": { 00:30:07.862 "lvol_store_uuid": "da5e8ca8-d8a2-4681-a362-b25ed09f699c", 00:30:07.862 "base_bdev": "aio_bdev", 00:30:07.862 "thin_provision": false, 00:30:07.862 "num_allocated_clusters": 38, 00:30:07.862 "snapshot": false, 00:30:07.862 "clone": false, 00:30:07.862 "esnap_clone": false 00:30:07.862 } 00:30:07.862 } 00:30:07.862 } 00:30:07.862 ] 00:30:07.862 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:07.862 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:07.862 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:07.862 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:07.862 17:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:07.862 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:08.174 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:08.174 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cba18497-1d1f-47eb-9a9e-da346d42cd5d 00:30:08.451 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da5e8ca8-d8a2-4681-a362-b25ed09f699c 00:30:08.451 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.710 00:30:08.710 real 0m17.095s 00:30:08.710 user 0m34.348s 00:30:08.710 sys 0m3.922s 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:08.710 ************************************ 00:30:08.710 END TEST lvs_grow_dirty 00:30:08.710 ************************************ 
00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:08.710 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:08.710 nvmf_trace.0 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.970 17:24:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.970 rmmod nvme_tcp 00:30:08.970 rmmod nvme_fabrics 00:30:08.970 rmmod nvme_keyring 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2694237 ']' 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2694237 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2694237 ']' 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2694237 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2694237 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.970 
17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2694237' 00:30:08.970 killing process with pid 2694237 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2694237 00:30:08.970 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2694237 00:30:09.229 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.229 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.230 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.135 
17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.135 00:30:11.135 real 0m41.884s 00:30:11.135 user 0m51.922s 00:30:11.135 sys 0m10.279s 00:30:11.135 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.135 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:11.135 ************************************ 00:30:11.135 END TEST nvmf_lvs_grow 00:30:11.135 ************************************ 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:11.393 ************************************ 00:30:11.393 START TEST nvmf_bdev_io_wait 00:30:11.393 ************************************ 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:11.393 * Looking for test storage... 
00:30:11.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:11.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.393 --rc genhtml_branch_coverage=1 00:30:11.393 --rc genhtml_function_coverage=1 00:30:11.393 --rc genhtml_legend=1 00:30:11.393 --rc geninfo_all_blocks=1 00:30:11.393 --rc geninfo_unexecuted_blocks=1 00:30:11.393 00:30:11.393 ' 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:11.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.393 --rc genhtml_branch_coverage=1 00:30:11.393 --rc genhtml_function_coverage=1 00:30:11.393 --rc genhtml_legend=1 00:30:11.393 --rc geninfo_all_blocks=1 00:30:11.393 --rc geninfo_unexecuted_blocks=1 00:30:11.393 00:30:11.393 ' 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:11.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.393 --rc genhtml_branch_coverage=1 00:30:11.393 --rc genhtml_function_coverage=1 00:30:11.393 --rc genhtml_legend=1 00:30:11.393 --rc geninfo_all_blocks=1 00:30:11.393 --rc geninfo_unexecuted_blocks=1 00:30:11.393 00:30:11.393 ' 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:11.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.393 --rc genhtml_branch_coverage=1 00:30:11.393 --rc genhtml_function_coverage=1 
00:30:11.393 --rc genhtml_legend=1 00:30:11.393 --rc geninfo_all_blocks=1 00:30:11.393 --rc geninfo_unexecuted_blocks=1 00:30:11.393 00:30:11.393 ' 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.393 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:11.652 17:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.652 17:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.652 17:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:11.652 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.653 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.653 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.653 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:11.653 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:11.653 17:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:11.653 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:18.224 17:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.224 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:18.225 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:18.225 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:18.225 Found net devices under 0000:86:00.0: cvl_0_0 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:18.225 Found net devices under 0000:86:00.1: cvl_0_1 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.225 17:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:18.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:30:18.225 00:30:18.225 --- 10.0.0.2 ping statistics --- 00:30:18.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.225 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:18.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:30:18.225 00:30:18.225 --- 10.0.0.1 ping statistics --- 00:30:18.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.225 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.225 17:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:18.225 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2698443 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2698443 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2698443 ']' 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
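The `nvmf_tgt` command line echoed above (`-i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc`) is assembled piecewise in a bash array by `build_nvmf_app_args` (the `NVMF_APP+=(...)` lines earlier in this trace). A minimal standalone sketch of that array-building pattern; the variable names here are illustrative, not lifted from common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the NVMF_APP array-building pattern seen in build_nvmf_app_args.
interrupt_mode=1
shm_id=0

app=(nvmf_tgt)                      # base command
app+=(-i "$shm_id" -e 0xFFFF)       # always-present flags
if [ "$interrupt_mode" -eq 1 ]; then
    app+=(--interrupt-mode)         # conditional flag, mirrors nvmf/common.sh@33-34
fi
app+=(-m 0xF --wait-for-rpc)

# Expanding "${app[@]}" keeps each element as one argument even if it
# contains spaces; "${app[*]}" joins them for display.
printf '%s\n' "${app[*]}"
# prints: nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
```

Arrays avoid the word-splitting and quoting pitfalls of accumulating flags in a plain string variable.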
00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 [2024-11-20 17:24:35.398933] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:18.226 [2024-11-20 17:24:35.399877] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:30:18.226 [2024-11-20 17:24:35.399912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.226 [2024-11-20 17:24:35.478264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:18.226 [2024-11-20 17:24:35.522195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.226 [2024-11-20 17:24:35.522236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.226 [2024-11-20 17:24:35.522246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.226 [2024-11-20 17:24:35.522251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.226 [2024-11-20 17:24:35.522256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
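`waitforlisten 2698443` above blocks until the freshly started target is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to the `max_retries=100` visible in the trace. The retry loop can be sketched as follows; the probe is a simplified stand-in (the real helper checks the RPC socket, not a bare path):

```shell
#!/usr/bin/env bash
# Poll for a path to appear with bounded retries, mimicking the
# max_retries=100 loop that waitforlisten uses for /var/tmp/spdk.sock.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0   # target is up
        sleep 0.1                    # back off before re-probing
    done
    return 1                         # timed out
}

# Usage: simulate a process that comes up shortly after the wait starts.
tmp=$(mktemp -u)
( sleep 0.3; touch "$tmp" ) &
wait_for_path "$tmp" 100 && echo "listening"
```

Bounding the retries matters in CI: a target that never comes up fails the stage quickly instead of hanging the job.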
00:30:18.226 [2024-11-20 17:24:35.523690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.226 [2024-11-20 17:24:35.523797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.226 [2024-11-20 17:24:35.523902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.226 [2024-11-20 17:24:35.523903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:18.226 [2024-11-20 17:24:35.524247] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.226 17:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 [2024-11-20 17:24:35.649496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:18.226 [2024-11-20 17:24:35.649845] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:18.226 [2024-11-20 17:24:35.650345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:18.226 [2024-11-20 17:24:35.650371] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
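The namespace plumbing earlier in the trace (nvmf/common.sh@271-291) moves one port into a private namespace so initiator and target traffic actually cross the physical link. A dry-run sketch of that sequence; the interface names `cvl_0_0`/`cvl_0_1` come from this run's hardware, and `$run` is set to `echo` so the commands are printed rather than executed (running them for real needs root and the real NICs):

```shell
#!/usr/bin/env bash
run=echo   # set to "" to actually execute; requires root and the NICs above
ns=cvl_0_0_ns_spdk

$run ip netns add "$ns"
$run ip link set cvl_0_0 netns "$ns"              # target port into the namespace
$run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the default ns
$run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
$run ip link set cvl_0_1 up
$run ip netns exec "$ns" ip link set cvl_0_0 up
$run ip netns exec "$ns" ip link set lo up
# Allow NVMe/TCP (port 4420) in, tagged so later cleanup can find the rule
$run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: bdev_io_wait test rule'
$run ping -c 1 10.0.0.2                           # verify the path before tests run
```

The ping check at the end is what produced the `64 bytes from 10.0.0.2` statistics logged above: it proves the topology before any NVMe traffic is attempted.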
00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 [2024-11-20 17:24:35.660626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 Malloc0 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.226 17:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:18.226 [2024-11-20 17:24:35.732903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2698490 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2698492 00:30:18.226 17:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:18.226 { 00:30:18.226 "params": { 00:30:18.226 "name": "Nvme$subsystem", 00:30:18.226 "trtype": "$TEST_TRANSPORT", 00:30:18.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.226 "adrfam": "ipv4", 00:30:18.226 "trsvcid": "$NVMF_PORT", 00:30:18.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.226 "hdgst": ${hdgst:-false}, 00:30:18.226 "ddgst": ${ddgst:-false} 00:30:18.226 }, 00:30:18.226 "method": "bdev_nvme_attach_controller" 00:30:18.226 } 00:30:18.226 EOF 00:30:18.226 )") 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2698494 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:18.226 17:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:18.226 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:18.226 { 00:30:18.226 "params": { 00:30:18.226 "name": "Nvme$subsystem", 00:30:18.226 "trtype": "$TEST_TRANSPORT", 00:30:18.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.226 "adrfam": "ipv4", 00:30:18.226 "trsvcid": "$NVMF_PORT", 00:30:18.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.226 "hdgst": ${hdgst:-false}, 00:30:18.226 "ddgst": ${ddgst:-false} 00:30:18.226 }, 00:30:18.226 "method": "bdev_nvme_attach_controller" 00:30:18.227 } 00:30:18.227 EOF 00:30:18.227 )") 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2698497 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:18.227 { 00:30:18.227 "params": { 00:30:18.227 "name": "Nvme$subsystem", 00:30:18.227 "trtype": "$TEST_TRANSPORT", 00:30:18.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.227 "adrfam": "ipv4", 00:30:18.227 "trsvcid": "$NVMF_PORT", 00:30:18.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.227 "hdgst": ${hdgst:-false}, 00:30:18.227 "ddgst": ${ddgst:-false} 00:30:18.227 }, 00:30:18.227 "method": "bdev_nvme_attach_controller" 00:30:18.227 } 00:30:18.227 EOF 00:30:18.227 )") 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:18.227 { 00:30:18.227 "params": { 00:30:18.227 "name": "Nvme$subsystem", 00:30:18.227 "trtype": "$TEST_TRANSPORT", 00:30:18.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.227 "adrfam": "ipv4", 00:30:18.227 "trsvcid": "$NVMF_PORT", 00:30:18.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.227 "hdgst": ${hdgst:-false}, 00:30:18.227 "ddgst": ${ddgst:-false} 00:30:18.227 }, 00:30:18.227 "method": 
"bdev_nvme_attach_controller" 00:30:18.227 } 00:30:18.227 EOF 00:30:18.227 )") 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2698490 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:18.227 "params": { 00:30:18.227 "name": "Nvme1", 00:30:18.227 "trtype": "tcp", 00:30:18.227 "traddr": "10.0.0.2", 00:30:18.227 "adrfam": "ipv4", 00:30:18.227 "trsvcid": "4420", 00:30:18.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:18.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:18.227 "hdgst": false, 00:30:18.227 "ddgst": false 00:30:18.227 }, 00:30:18.227 "method": "bdev_nvme_attach_controller" 00:30:18.227 }' 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:18.227 "params": { 00:30:18.227 "name": "Nvme1", 00:30:18.227 "trtype": "tcp", 00:30:18.227 "traddr": "10.0.0.2", 00:30:18.227 "adrfam": "ipv4", 00:30:18.227 "trsvcid": "4420", 00:30:18.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:18.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:18.227 "hdgst": false, 00:30:18.227 "ddgst": false 00:30:18.227 }, 00:30:18.227 "method": "bdev_nvme_attach_controller" 00:30:18.227 }' 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:18.227 "params": { 00:30:18.227 "name": "Nvme1", 00:30:18.227 "trtype": "tcp", 00:30:18.227 "traddr": "10.0.0.2", 00:30:18.227 "adrfam": "ipv4", 00:30:18.227 "trsvcid": "4420", 00:30:18.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:18.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:18.227 "hdgst": false, 00:30:18.227 "ddgst": false 00:30:18.227 }, 00:30:18.227 "method": "bdev_nvme_attach_controller" 00:30:18.227 }' 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:18.227 17:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:18.227 "params": { 00:30:18.227 "name": "Nvme1", 00:30:18.227 "trtype": "tcp", 00:30:18.227 "traddr": "10.0.0.2", 00:30:18.227 "adrfam": "ipv4", 00:30:18.227 "trsvcid": "4420", 00:30:18.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:18.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:18.227 "hdgst": false, 00:30:18.227 "ddgst": false 00:30:18.227 }, 00:30:18.227 "method": "bdev_nvme_attach_controller" 
00:30:18.227 }' 00:30:18.227 [2024-11-20 17:24:35.784145] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:30:18.227 [2024-11-20 17:24:35.784197] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:18.227 [2024-11-20 17:24:35.785340] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:30:18.227 [2024-11-20 17:24:35.785382] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:18.227 [2024-11-20 17:24:35.789389] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:30:18.227 [2024-11-20 17:24:35.789437] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:18.227 [2024-11-20 17:24:35.798493] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:30:18.227 [2024-11-20 17:24:35.798559] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:18.227 [2024-11-20 17:24:35.970569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.227 [2024-11-20 17:24:36.012917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:18.227 [2024-11-20 17:24:36.064312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.227 [2024-11-20 17:24:36.104652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:18.227 [2024-11-20 17:24:36.156906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.227 [2024-11-20 17:24:36.200321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.227 [2024-11-20 17:24:36.210426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:18.227 [2024-11-20 17:24:36.243188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:18.485 Running I/O for 1 seconds... 00:30:18.485 Running I/O for 1 seconds... 00:30:18.485 Running I/O for 1 seconds... 00:30:18.485 Running I/O for 1 seconds... 
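The trace above shows each bdevperf instance receiving its target config through `gen_nvmf_target_json` (nvmf/common.sh@560-@586): a JSON fragment per subsystem is built with a heredoc, appended to a `config` array, and the fragments are joined with `IFS=,` before being normalized by `jq .` and handed to bdevperf via `--json /dev/fd/63`. A minimal standalone sketch of that pattern, with stand-in values for the environment variables the real script exports:

```shell
#!/usr/bin/env bash
# Simplified, hypothetical sketch of the gen_nvmf_target_json pattern traced
# above. Variable names mirror the trace; the addresses/ports are stand-ins.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
# One heredoc-built JSON fragment per subsystem (the trace loops over "$@").
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")

# Join all fragments into one document, as common.sh@585 does with IFS=','.
IFS=','
merged="${config[*]}"
printf '%s\n' "$merged"
```

In the real script the joined string is piped through `jq .` (common.sh@584), which both validates the JSON and pretty-prints it, producing the `printf '%s\n' '{ ... }'` output visible in the trace.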
00:30:19.416 14303.00 IOPS, 55.87 MiB/s [2024-11-20T16:24:37.459Z] 7145.00 IOPS, 27.91 MiB/s 00:30:19.416 Latency(us) 00:30:19.416 [2024-11-20T16:24:37.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.416 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:19.416 Nvme1n1 : 1.01 14347.54 56.05 0.00 0.00 8894.15 3744.91 12170.97 00:30:19.416 [2024-11-20T16:24:37.459Z] =================================================================================================================== 00:30:19.416 [2024-11-20T16:24:37.459Z] Total : 14347.54 56.05 0.00 0.00 8894.15 3744.91 12170.97 00:30:19.416 00:30:19.416 Latency(us) 00:30:19.416 [2024-11-20T16:24:37.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.416 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:19.416 Nvme1n1 : 1.01 7201.81 28.13 0.00 0.00 17661.75 1490.16 29584.82 00:30:19.416 [2024-11-20T16:24:37.459Z] =================================================================================================================== 00:30:19.416 [2024-11-20T16:24:37.459Z] Total : 7201.81 28.13 0.00 0.00 17661.75 1490.16 29584.82 00:30:19.416 246064.00 IOPS, 961.19 MiB/s 00:30:19.416 Latency(us) 00:30:19.416 [2024-11-20T16:24:37.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.416 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:19.416 Nvme1n1 : 1.00 245688.97 959.72 0.00 0.00 518.43 222.35 1497.97 00:30:19.416 [2024-11-20T16:24:37.459Z] =================================================================================================================== 00:30:19.416 [2024-11-20T16:24:37.459Z] Total : 245688.97 959.72 0.00 0.00 518.43 222.35 1497.97 00:30:19.675 7971.00 IOPS, 31.14 MiB/s 00:30:19.675 Latency(us) 00:30:19.675 [2024-11-20T16:24:37.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.675 
Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:19.675 Nvme1n1 : 1.00 8083.18 31.57 0.00 0.00 15803.17 2590.23 35202.19 00:30:19.675 [2024-11-20T16:24:37.718Z] =================================================================================================================== 00:30:19.675 [2024-11-20T16:24:37.718Z] Total : 8083.18 31.57 0.00 0.00 15803.17 2590.23 35202.19 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2698492 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2698494 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2698497 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:19.675 
17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:19.675 rmmod nvme_tcp 00:30:19.675 rmmod nvme_fabrics 00:30:19.675 rmmod nvme_keyring 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2698443 ']' 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2698443 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2698443 ']' 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2698443 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698443 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698443' 00:30:19.675 killing process with pid 2698443 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2698443 00:30:19.675 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2698443 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.934 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.934 
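The teardown above runs `killprocess 2698443` (common/autotest_common.sh@954-@978): it verifies the pid is still alive with `kill -0`, resolves the process name with `ps --no-headers -o comm=`, refuses to kill a bare `sudo` wrapper, then kills and reaps the process. A hedged standalone sketch of that helper, exercised against a throwaway `sleep` child instead of the nvmf target:

```shell
#!/usr/bin/env bash
# Condensed, hypothetical sketch of the killprocess helper traced above.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" || return 1                   # is the process still alive?
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  [ "$process_name" != sudo ] || return 1      # never kill a bare sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true              # reap it; ignore signal status
}

sleep 60 &
victim=$!
killprocess "$victim"
```

The `wait` at the end is what makes the helper safe to follow immediately with cleanup: once it returns, the child has been reaped and its resources released.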
17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.479 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:22.479 00:30:22.479 real 0m10.683s 00:30:22.479 user 0m14.885s 00:30:22.479 sys 0m6.316s 00:30:22.480 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.480 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:22.480 ************************************ 00:30:22.480 END TEST nvmf_bdev_io_wait 00:30:22.480 ************************************ 00:30:22.480 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:22.480 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:22.480 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:22.480 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:22.480 ************************************ 00:30:22.480 START TEST nvmf_queue_depth 00:30:22.480 ************************************ 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:22.480 * Looking for test storage... 
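The queue_depth test that starts above begins (in the trace that follows) by probing the installed lcov with `lt 1.15 2` via the `cmp_versions` helper in scripts/common.sh@333-@368: both dotted versions are split on the `.-:` character class into arrays, then compared component by component. A condensed, hypothetical standalone version of that comparison:

```shell
#!/usr/bin/env bash
# Condensed sketch of the lt/cmp_versions logic traced below
# (scripts/common.sh). Split on ".-:" as the real script does, then let the
# first differing component decide; equal versions are not "less than".
lt() {
  local IFS='.-:'
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v a b
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    a=${ver1[v]:-0}
    b=${ver2[v]:-0}
    if (( a > b )); then return 1; fi
    if (( a < b )); then return 0; fi
  done
  return 1
}

lt 1.15 2 && echo "1.15 < 2"
```

`lt 1.15 2` succeeds because the first components already differ (1 < 2), which is why the trace below takes the lcov 1.x branch and sets the `--rc lcov_*_coverage=1` options.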
00:30:22.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:22.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.480 --rc genhtml_branch_coverage=1 00:30:22.480 --rc genhtml_function_coverage=1 00:30:22.480 --rc genhtml_legend=1 00:30:22.480 --rc geninfo_all_blocks=1 00:30:22.480 --rc geninfo_unexecuted_blocks=1 00:30:22.480 00:30:22.480 ' 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:22.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.480 --rc genhtml_branch_coverage=1 00:30:22.480 --rc genhtml_function_coverage=1 00:30:22.480 --rc genhtml_legend=1 00:30:22.480 --rc geninfo_all_blocks=1 00:30:22.480 --rc geninfo_unexecuted_blocks=1 00:30:22.480 00:30:22.480 ' 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:22.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.480 --rc genhtml_branch_coverage=1 00:30:22.480 --rc genhtml_function_coverage=1 00:30:22.480 --rc genhtml_legend=1 00:30:22.480 --rc geninfo_all_blocks=1 00:30:22.480 --rc geninfo_unexecuted_blocks=1 00:30:22.480 00:30:22.480 ' 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:22.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.480 --rc genhtml_branch_coverage=1 00:30:22.480 --rc genhtml_function_coverage=1 00:30:22.480 --rc genhtml_legend=1 00:30:22.480 --rc 
geninfo_all_blocks=1 00:30:22.480 --rc geninfo_unexecuted_blocks=1 00:30:22.480 00:30:22.480 ' 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.480 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.481 17:24:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:22.481 17:24:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:22.481 17:24:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:22.481 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:27.758 
17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.758 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.759 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:27.759 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:27.759 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:27.759 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:27.759 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:27.759 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:27.759 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.759 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:27.759 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.019 17:24:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:28.019 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:28.019 Found net devices under 0000:86:00.0: cvl_0_0 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:28.019 Found net devices under 0000:86:00.1: cvl_0_1 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.019 17:24:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.019 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.020 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.020 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.020 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.020 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:28.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:30:28.020 00:30:28.020 --- 10.0.0.2 ping statistics --- 00:30:28.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.020 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:30:28.020 00:30:28.020 --- 10.0.0.1 ping statistics --- 00:30:28.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.020 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.020 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.280 17:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2702268 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2702268 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2702268 ']' 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.280 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.280 [2024-11-20 17:24:46.153490] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:28.280 [2024-11-20 17:24:46.154454] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:30:28.280 [2024-11-20 17:24:46.154489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.280 [2024-11-20 17:24:46.236944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.280 [2024-11-20 17:24:46.277322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.280 [2024-11-20 17:24:46.277357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.280 [2024-11-20 17:24:46.277366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.280 [2024-11-20 17:24:46.277372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.280 [2024-11-20 17:24:46.277377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.280 [2024-11-20 17:24:46.277911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.540 [2024-11-20 17:24:46.346000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:28.540 [2024-11-20 17:24:46.346241] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.540 [2024-11-20 17:24:46.414644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.540 Malloc0 00:30:28.540 17:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.540 [2024-11-20 17:24:46.490568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.540 
17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2702289 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2702289 /var/tmp/bdevperf.sock 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2702289 ']' 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:28.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.540 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.540 [2024-11-20 17:24:46.541363] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:30:28.540 [2024-11-20 17:24:46.541405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2702289 ] 00:30:28.799 [2024-11-20 17:24:46.615147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.799 [2024-11-20 17:24:46.657514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.799 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.799 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:28.799 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:28.799 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.799 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:29.058 NVMe0n1 00:30:29.058 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.058 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:29.058 Running I/O for 10 seconds... 
00:30:30.932 12095.00 IOPS, 47.25 MiB/s [2024-11-20T16:24:50.353Z] 12221.50 IOPS, 47.74 MiB/s [2024-11-20T16:24:51.290Z] 12289.67 IOPS, 48.01 MiB/s [2024-11-20T16:24:52.230Z] 12290.00 IOPS, 48.01 MiB/s [2024-11-20T16:24:53.167Z] 12339.00 IOPS, 48.20 MiB/s [2024-11-20T16:24:54.105Z] 12442.67 IOPS, 48.60 MiB/s [2024-11-20T16:24:55.041Z] 12446.29 IOPS, 48.62 MiB/s [2024-11-20T16:24:55.979Z] 12525.00 IOPS, 48.93 MiB/s [2024-11-20T16:24:57.357Z] 12522.67 IOPS, 48.92 MiB/s [2024-11-20T16:24:57.357Z] 12558.70 IOPS, 49.06 MiB/s 00:30:39.314 Latency(us) 00:30:39.314 [2024-11-20T16:24:57.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.314 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:39.314 Verification LBA range: start 0x0 length 0x4000 00:30:39.314 NVMe0n1 : 10.06 12577.37 49.13 0.00 0.00 81121.12 19099.06 50681.17 00:30:39.314 [2024-11-20T16:24:57.357Z] =================================================================================================================== 00:30:39.314 [2024-11-20T16:24:57.357Z] Total : 12577.37 49.13 0.00 0.00 81121.12 19099.06 50681.17 00:30:39.314 { 00:30:39.314 "results": [ 00:30:39.314 { 00:30:39.314 "job": "NVMe0n1", 00:30:39.314 "core_mask": "0x1", 00:30:39.314 "workload": "verify", 00:30:39.314 "status": "finished", 00:30:39.314 "verify_range": { 00:30:39.314 "start": 0, 00:30:39.314 "length": 16384 00:30:39.314 }, 00:30:39.314 "queue_depth": 1024, 00:30:39.314 "io_size": 4096, 00:30:39.314 "runtime": 10.060209, 00:30:39.314 "iops": 12577.372895533284, 00:30:39.314 "mibps": 49.13036287317689, 00:30:39.314 "io_failed": 0, 00:30:39.314 "io_timeout": 0, 00:30:39.314 "avg_latency_us": 81121.12308294109, 00:30:39.314 "min_latency_us": 19099.062857142857, 00:30:39.314 "max_latency_us": 50681.17333333333 00:30:39.314 } 00:30:39.314 ], 00:30:39.314 "core_count": 1 00:30:39.314 } 00:30:39.314 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2702289 00:30:39.314 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2702289 ']' 00:30:39.314 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2702289 00:30:39.314 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:39.314 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2702289 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2702289' 00:30:39.315 killing process with pid 2702289 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2702289 00:30:39.315 Received shutdown signal, test time was about 10.000000 seconds 00:30:39.315 00:30:39.315 Latency(us) 00:30:39.315 [2024-11-20T16:24:57.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.315 [2024-11-20T16:24:57.358Z] =================================================================================================================== 00:30:39.315 [2024-11-20T16:24:57.358Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2702289 00:30:39.315 17:24:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.315 rmmod nvme_tcp 00:30:39.315 rmmod nvme_fabrics 00:30:39.315 rmmod nvme_keyring 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2702268 ']' 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2702268 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2702268 ']' 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2702268 00:30:39.315 17:24:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.315 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2702268 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2702268' 00:30:39.574 killing process with pid 2702268 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2702268 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2702268 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.574 17:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.110 00:30:42.110 real 0m19.642s 00:30:42.110 user 0m22.596s 00:30:42.110 sys 0m6.315s 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:42.110 ************************************ 00:30:42.110 END TEST nvmf_queue_depth 00:30:42.110 ************************************ 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:42.110 ************************************ 00:30:42.110 START 
TEST nvmf_target_multipath 00:30:42.110 ************************************ 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:42.110 * Looking for test storage... 00:30:42.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.110 17:24:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.110 --rc genhtml_branch_coverage=1 00:30:42.110 --rc genhtml_function_coverage=1 00:30:42.110 --rc genhtml_legend=1 00:30:42.110 --rc geninfo_all_blocks=1 00:30:42.110 --rc geninfo_unexecuted_blocks=1 00:30:42.110 00:30:42.110 ' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.110 --rc genhtml_branch_coverage=1 00:30:42.110 --rc genhtml_function_coverage=1 00:30:42.110 --rc genhtml_legend=1 00:30:42.110 --rc geninfo_all_blocks=1 00:30:42.110 --rc geninfo_unexecuted_blocks=1 00:30:42.110 00:30:42.110 ' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.110 --rc genhtml_branch_coverage=1 00:30:42.110 --rc genhtml_function_coverage=1 00:30:42.110 --rc genhtml_legend=1 00:30:42.110 --rc geninfo_all_blocks=1 00:30:42.110 --rc geninfo_unexecuted_blocks=1 00:30:42.110 00:30:42.110 ' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.110 --rc genhtml_branch_coverage=1 00:30:42.110 --rc genhtml_function_coverage=1 00:30:42.110 --rc genhtml_legend=1 00:30:42.110 --rc geninfo_all_blocks=1 00:30:42.110 --rc geninfo_unexecuted_blocks=1 00:30:42.110 00:30:42.110 ' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.110 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.111 17:24:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.111 17:24:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.111 17:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:48.684 17:25:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:48.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:48.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:48.684 Found net devices under 0000:86:00.0: cvl_0_0 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.684 17:25:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:48.684 Found net devices under 0000:86:00.1: cvl_0_1 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:48.684 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.685 17:25:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.685 17:25:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:48.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:30:48.685 00:30:48.685 --- 10.0.0.2 ping statistics --- 00:30:48.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.685 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:48.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:30:48.685 00:30:48.685 --- 10.0.0.1 ping statistics --- 00:30:48.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.685 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:48.685 only one NIC for nvmf test 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:48.685 17:25:05 
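The nvmf_tcp_init sequence traced above builds a single-host TCP loopback: the target-side NIC is moved into a private network namespace so initiator and target exchange real TCP traffic over the wire pair, verified by the two pings. A condensed sketch of that sequence, using the interface/namespace names from this log (commands are echoed rather than executed, since the real ones need root; this is an illustration, not the actual nvmf/common.sh code):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup traced in the log: target NIC goes into
# its own namespace, each side gets one 10.0.0.x/24 address, links come up.
nvmf_tcp_init_sketch() {
    local target_if=$1 initiator_if=$2 ns=$3
    echo "ip netns add $ns"
    echo "ip link set $target_if netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $initiator_if"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
    echo "ip link set $initiator_if up"
    echo "ip netns exec $ns ip link set $target_if up"
    echo "ip netns exec $ns ip link set lo up"
}
```

Because the target lives in the namespace, the log's ping check runs in both directions: plain `ping 10.0.0.2` from the host, and `ip netns exec … ping 10.0.0.1` from inside the namespace.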
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:48.685 rmmod nvme_tcp 00:30:48.685 rmmod nvme_fabrics 00:30:48.685 rmmod nvme_keyring 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:48.685 17:25:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.685 17:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:50.064 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:50.065 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:50.065 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:50.065 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:50.065 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:50.065 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:50.065 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.065 
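The `iptr` teardown traced above pipes `iptables-save | grep -v SPDK_NVMF | iptables-restore`: every rule the test added went through the `ipts` wrapper (visible earlier in the log), which appends an `SPDK_NVMF:` comment, so a single grep strips exactly the test's rules and nothing else. A pure-text sketch of both halves (illustrative reimplementations that echo/filter instead of calling iptables):

```shell
#!/usr/bin/env bash
# ipts_sketch: tag a rule with an SPDK_NVMF comment, as the ipts wrapper does.
ipts_sketch() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
# strip_spdk_rules: the grep stage of the iptr teardown pipeline.
strip_spdk_rules() {
    grep -v SPDK_NVMF
}
```

The comment is the key design choice: it makes teardown idempotent and safe on hosts with unrelated firewall rules, since only self-tagged entries are dropped.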
17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.065 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.065 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:50.065 00:30:50.065 real 0m8.276s 00:30:50.065 user 0m1.784s 00:30:50.065 sys 0m4.488s 00:30:50.065 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:50.065 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:50.065 ************************************ 00:30:50.065 END TEST nvmf_target_multipath 00:30:50.065 ************************************ 00:30:50.065 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:50.065 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:50.065 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.065 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:50.065 ************************************ 00:30:50.065 START TEST nvmf_zcopy 00:30:50.065 ************************************ 00:30:50.065 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:50.325 * Looking for test storage... 
00:30:50.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:50.325 17:25:08 
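The lcov version gate traced above (`lt 1.15 2` via cmp_versions) splits each version string on `.`, `-`, and `:` (the `IFS=.-:` reads in the trace) and compares fields numerically left to right, padding missing fields with zero. A condensed sketch of that comparison (`ver_lt` is an illustrative name, not the scripts/common.sh function):

```shell
#!/usr/bin/env bash
# Field-wise numeric version comparison: returns 0 (true) iff $1 < $2.
ver_lt() {
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # missing fields default to 0, so 1.15 vs 2 compares as 1.15 vs 2.0
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1  # equal is not less-than
}
```

Numeric field comparison is what makes `1.2.9 < 1.10` come out true, where a plain string compare would get it wrong.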
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:50.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.325 --rc genhtml_branch_coverage=1 00:30:50.325 --rc genhtml_function_coverage=1 00:30:50.325 --rc genhtml_legend=1 00:30:50.325 --rc geninfo_all_blocks=1 00:30:50.325 --rc geninfo_unexecuted_blocks=1 00:30:50.325 00:30:50.325 ' 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:50.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.325 --rc genhtml_branch_coverage=1 00:30:50.325 --rc genhtml_function_coverage=1 00:30:50.325 --rc genhtml_legend=1 00:30:50.325 --rc geninfo_all_blocks=1 00:30:50.325 --rc geninfo_unexecuted_blocks=1 00:30:50.325 00:30:50.325 ' 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:50.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.325 --rc genhtml_branch_coverage=1 00:30:50.325 --rc genhtml_function_coverage=1 00:30:50.325 --rc genhtml_legend=1 00:30:50.325 --rc geninfo_all_blocks=1 00:30:50.325 --rc geninfo_unexecuted_blocks=1 00:30:50.325 00:30:50.325 ' 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:50.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.325 --rc genhtml_branch_coverage=1 00:30:50.325 --rc genhtml_function_coverage=1 00:30:50.325 --rc genhtml_legend=1 00:30:50.325 --rc geninfo_all_blocks=1 00:30:50.325 --rc geninfo_unexecuted_blocks=1 00:30:50.325 00:30:50.325 ' 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.325 17:25:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.325 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.326 17:25:08 
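The paths/export.sh trace above prepends the same toolchain directories unconditionally each time it is sourced, which is why the exported PATH repeats `/opt/go/1.21.1/bin`, `/opt/protoc/21.7/bin`, and `/opt/golangci/1.54.2/bin` many times over. Harmless for lookup (the first hit wins) but noisy; a duplicate-free prepend would look like this (`path_prepend` is an illustrative helper, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already present: leave PATH unchanged
        *) PATH=$1:$PATH ;;
    esac
}
```

Wrapping PATH in colons before matching avoids false hits on substrings (e.g. `/opt/go` inside `/opt/golangci`).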
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:50.326 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.902 
17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.902 17:25:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:56.902 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:56.902 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.902 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:56.903 Found net devices under 0000:86:00.0: cvl_0_0 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:56.903 Found net devices under 0000:86:00.1: cvl_0_1 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.903 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.903 17:25:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:56.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:30:56.903 00:30:56.903 --- 10.0.0.2 ping statistics --- 00:30:56.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.903 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:30:56.903 00:30:56.903 --- 10.0.0.1 ping statistics --- 00:30:56.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.903 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2710933 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2710933 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2710933 ']' 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.903 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.903 [2024-11-20 17:25:14.276140] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:56.903 [2024-11-20 17:25:14.277028] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:30:56.903 [2024-11-20 17:25:14.277062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.904 [2024-11-20 17:25:14.357217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.904 [2024-11-20 17:25:14.400040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.904 [2024-11-20 17:25:14.400070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.904 [2024-11-20 17:25:14.400077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.904 [2024-11-20 17:25:14.400083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.904 [2024-11-20 17:25:14.400088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.904 [2024-11-20 17:25:14.400656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.904 [2024-11-20 17:25:14.467817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:56.904 [2024-11-20 17:25:14.468040] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.904 [2024-11-20 17:25:14.537327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.904 
17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.904 [2024-11-20 17:25:14.565534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.904 malloc0 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:56.904 { 00:30:56.904 "params": { 00:30:56.904 "name": "Nvme$subsystem", 00:30:56.904 "trtype": "$TEST_TRANSPORT", 00:30:56.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.904 "adrfam": "ipv4", 00:30:56.904 "trsvcid": "$NVMF_PORT", 00:30:56.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.904 "hdgst": ${hdgst:-false}, 00:30:56.904 "ddgst": ${ddgst:-false} 00:30:56.904 }, 00:30:56.904 "method": "bdev_nvme_attach_controller" 00:30:56.904 } 00:30:56.904 EOF 00:30:56.904 )") 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:56.904 17:25:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:56.904 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:56.904 "params": { 00:30:56.904 "name": "Nvme1", 00:30:56.904 "trtype": "tcp", 00:30:56.904 "traddr": "10.0.0.2", 00:30:56.904 "adrfam": "ipv4", 00:30:56.904 "trsvcid": "4420", 00:30:56.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:56.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:56.904 "hdgst": false, 00:30:56.904 "ddgst": false 00:30:56.904 }, 00:30:56.904 "method": "bdev_nvme_attach_controller" 00:30:56.904 }' 00:30:56.904 [2024-11-20 17:25:14.658921] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:30:56.904 [2024-11-20 17:25:14.658961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711117 ] 00:30:56.904 [2024-11-20 17:25:14.731844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.904 [2024-11-20 17:25:14.772150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.163 Running I/O for 10 seconds... 
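The `rpc_cmd` calls traced above bring the target up in a handful of RPCs: create the zcopy-enabled TCP transport, create the subsystem, add data and discovery listeners, create a malloc bdev, and attach it as namespace 1. A dry-run sketch follows; the `RPC` variable is not part of the test scripts (it defaults to `echo` so the sequence is printed rather than executed):

```shell
#!/bin/sh
# Dry-run sketch of the target bring-up performed via rpc_cmd above.
# Point RPC at scripts/rpc.py (with nvmf_tgt already running) to apply.
RPC=${RPC:-echo}
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
```

Note the `-m 10` cap on namespaces and `--zcopy` on the transport: both come straight from the `target/zcopy.sh` arguments visible in the trace.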
00:30:59.477 8514.00 IOPS, 66.52 MiB/s [2024-11-20T16:25:18.457Z] 8583.50 IOPS, 67.06 MiB/s [2024-11-20T16:25:19.394Z] 8609.33 IOPS, 67.26 MiB/s [2024-11-20T16:25:20.331Z] 8612.25 IOPS, 67.28 MiB/s [2024-11-20T16:25:21.267Z] 8610.20 IOPS, 67.27 MiB/s [2024-11-20T16:25:22.205Z] 8607.17 IOPS, 67.24 MiB/s [2024-11-20T16:25:23.141Z] 8614.71 IOPS, 67.30 MiB/s [2024-11-20T16:25:24.163Z] 8619.12 IOPS, 67.34 MiB/s [2024-11-20T16:25:25.552Z] 8624.78 IOPS, 67.38 MiB/s [2024-11-20T16:25:25.552Z] 8625.50 IOPS, 67.39 MiB/s 00:31:07.509 Latency(us) 00:31:07.509 [2024-11-20T16:25:25.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:07.509 Verification LBA range: start 0x0 length 0x1000 00:31:07.509 Nvme1n1 : 10.01 8627.52 67.40 0.00 0.00 14794.25 2293.76 21595.67 00:31:07.509 [2024-11-20T16:25:25.552Z] =================================================================================================================== 00:31:07.509 [2024-11-20T16:25:25.552Z] Total : 8627.52 67.40 0.00 0.00 14794.25 2293.76 21595.67 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2712786 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:07.509 17:25:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:07.509 { 00:31:07.509 "params": { 00:31:07.509 "name": "Nvme$subsystem", 00:31:07.509 "trtype": "$TEST_TRANSPORT", 00:31:07.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.509 "adrfam": "ipv4", 00:31:07.509 "trsvcid": "$NVMF_PORT", 00:31:07.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.509 "hdgst": ${hdgst:-false}, 00:31:07.509 "ddgst": ${ddgst:-false} 00:31:07.509 }, 00:31:07.509 "method": "bdev_nvme_attach_controller" 00:31:07.509 } 00:31:07.509 EOF 00:31:07.509 )") 00:31:07.509 [2024-11-20 17:25:25.284991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.509 [2024-11-20 17:25:25.285028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:07.509 17:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:07.509 "params": { 00:31:07.509 "name": "Nvme1", 00:31:07.509 "trtype": "tcp", 00:31:07.509 "traddr": "10.0.0.2", 00:31:07.509 "adrfam": "ipv4", 00:31:07.509 "trsvcid": "4420", 00:31:07.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.509 "hdgst": false, 00:31:07.509 "ddgst": false 00:31:07.509 }, 00:31:07.509 "method": "bdev_nvme_attach_controller" 00:31:07.510 }' 00:31:07.510 [2024-11-20 17:25:25.296954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.296966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.308951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.308960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.320948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.320962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.326714] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
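The `gen_nvmf_target_json` heredoc traced above emits one `bdev_nvme_attach_controller` stanza per subsystem, which bdevperf then reads via `--json /dev/fd/...`. A simplified single-subsystem sketch (the real helper in `nvmf/common.sh` additionally joins multiple stanzas and pipes the result through `jq .`); defaults are the values from this run:

```shell
#!/bin/sh
# Simplified sketch of gen_nvmf_target_json for one subsystem.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT / hdgst / ddgst are
# the environment variables the traced heredoc expands; defaults below
# match this run.
gen_nvmf_target_json() {
  n=${1:-1}
  cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_nvmf_target_json 1
```

Feeding the config over a file descriptor rather than a file on disk is what the `--json /dev/fd/62` / `/dev/fd/63` arguments in the trace accomplish.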
00:31:07.510 [2024-11-20 17:25:25.326754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712786 ] 00:31:07.510 [2024-11-20 17:25:25.332950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.332960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.344948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.344957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.356947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.356956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.368947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.368956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.380947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.380955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.392946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.392954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.401150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.510 [2024-11-20 17:25:25.404948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:07.510 [2024-11-20 17:25:25.404957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.416951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.416967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.428954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.428969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.440961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.440974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.441943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.510 [2024-11-20 17:25:25.452955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.452968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.464954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.464972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.476951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.476964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.488947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.488958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.500950] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.500961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.512948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.512958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.524956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.524972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.536953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.536969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.510 [2024-11-20 17:25:25.548954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.510 [2024-11-20 17:25:25.548967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.560950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.560962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.572947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.572955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.584947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.584956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.596950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.596962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.608950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.608964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.620955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.620972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.632952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.632966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 Running I/O for 5 seconds... 00:31:07.769 [2024-11-20 17:25:25.648629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.648649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.662796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.662814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.677036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.677054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.689899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.689916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.704544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.704563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.718887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.718905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.733505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.733523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.749004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.749023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.762252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.762279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.776748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.776766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.790444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.790462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.769 [2024-11-20 17:25:25.805324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.769 [2024-11-20 17:25:25.805342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.821706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 
[2024-11-20 17:25:25.821726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.836262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.836280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.850871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.850891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.864843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.864862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.878446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.878466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.892928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.892947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.905704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.905723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.918950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.918969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.933584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.933603] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.949517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.949540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.964527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.964546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.978950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.978968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:25.993191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:25.993216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:26.005710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:26.005728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:26.018749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:26.018768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:26.033279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:26.033301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.029 [2024-11-20 17:25:26.044766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:26.044785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:08.029 [2024-11-20 17:25:26.058657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.029 [2024-11-20 17:25:26.058675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.073101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.073120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.084147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.084166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.098328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.098347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.113296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.113313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.127412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.127431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.141552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.141570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.156762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.156781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.168546] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.168565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.183119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.183138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.197605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.197623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.212823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.212842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.226762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.226781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.241259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.241277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.256463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.256482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.269339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.269357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.282975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.282993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.297316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.297337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.312851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.312869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.287 [2024-11-20 17:25:26.327060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.287 [2024-11-20 17:25:26.327077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.341486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.341503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.356870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.356889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.370480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.370497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.385666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.385683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.400666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 
[2024-11-20 17:25:26.400684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.413940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.413957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.429254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.429271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.445007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.445025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.456998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.457016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.470754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.470772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.485376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.485394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.500318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.500336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.514844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.514867] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.529358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.529375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.541806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.541824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.554604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.554621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.569536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.569553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.545 [2024-11-20 17:25:26.584248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.545 [2024-11-20 17:25:26.584267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.598271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.598288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.612841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.612859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.626854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.626872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:08.805 17085.00 IOPS, 133.48 MiB/s [2024-11-20T16:25:26.848Z] [2024-11-20 17:25:26.640884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.640903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.654434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.654452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.668939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.668958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.682194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.682216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.694367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.694385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.709416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.709433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.725232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.725251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.736609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.736627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:08.805 [2024-11-20 17:25:26.750596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.750614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.764917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.764937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.777494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.777511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.790564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.790582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.805074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.805092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.815676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.815693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.805 [2024-11-20 17:25:26.829870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.805 [2024-11-20 17:25:26.829888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.845030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.845049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.858851] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.858869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.873487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.873505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.889085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.889104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.900403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.900422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.915044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.915062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.929690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.929708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.944851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.944868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.959055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.959073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.973564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.973582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:26.989004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:26.989022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:27.001619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:27.001636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:27.014511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:27.014528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:27.029142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:27.029160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:27.041161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:27.041178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:27.054475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:27.054493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:27.069033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:27.069051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:27.082689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 
[2024-11-20 17:25:27.082707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.064 [2024-11-20 17:25:27.097594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.064 [2024-11-20 17:25:27.097611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.112166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.112185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.126774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.126792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.141318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.141335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.157122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.157140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.168828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.168846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.182455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.182472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.197419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.197437] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.213170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.213187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.223623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.223641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.238446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.238464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.253348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.253365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.268808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.268827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.280749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.280769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.294495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.294514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.309107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.309127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:09.324 [2024-11-20 17:25:27.320249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.320268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.334602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.334622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.324 [2024-11-20 17:25:27.349510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.324 [2024-11-20 17:25:27.349532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.583 [2024-11-20 17:25:27.364832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.583 [2024-11-20 17:25:27.364852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.583 [2024-11-20 17:25:27.378415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.378433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.389295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.389312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.402907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.402925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.417651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.417669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.432493] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.432512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.446764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.446782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.461496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.461514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.476560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.476580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.491046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.491065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.505149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.505168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.516129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.516147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.530326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.530345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.584 [2024-11-20 17:25:27.544995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:09.584 [2024-11-20 17:25:27.545013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the error pair "subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats at roughly 11-16 ms intervals from 17:25:27.557 through 17:25:29.699; identical repeats omitted]
00:31:09.844 17043.00 IOPS, 133.15 MiB/s [2024-11-20T16:25:27.887Z]
00:31:10.622 17112.67 IOPS, 133.69 MiB/s [2024-11-20T16:25:28.665Z]
00:31:11.660 17103.25 IOPS, 133.62 MiB/s [2024-11-20T16:25:29.703Z]
00:31:11.660 [2024-11-20 17:25:29.699280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext:
[2024-11-20 17:25:29.699298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.713463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.713480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.728689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.728707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.742780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.742799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.757108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.757126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.770031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.770048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.785020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.785038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.797388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.797405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.810269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.810286] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.824692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.824710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.839229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.839248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.853426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.853444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.866655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.866673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.881554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.881572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.897271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.897289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.912477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.912495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.926982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.927001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:11.920 [2024-11-20 17:25:29.940625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.940643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.920 [2024-11-20 17:25:29.954290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.920 [2024-11-20 17:25:29.954309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:29.969464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:29.969481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:29.984623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:29.984641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:29.999182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:29.999200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.014524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.014575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.029828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.029845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.045316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.045334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.060711] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.060745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.073562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.073579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.088857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.088875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.103002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.103020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.117655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.117672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.132691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.179 [2024-11-20 17:25:30.132710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.179 [2024-11-20 17:25:30.146557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.180 [2024-11-20 17:25:30.146575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.180 [2024-11-20 17:25:30.161694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.180 [2024-11-20 17:25:30.161712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.180 [2024-11-20 17:25:30.177171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:12.180 [2024-11-20 17:25:30.177191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.180 [2024-11-20 17:25:30.189488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.180 [2024-11-20 17:25:30.189506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.180 [2024-11-20 17:25:30.204681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.180 [2024-11-20 17:25:30.204701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.180 [2024-11-20 17:25:30.219335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.180 [2024-11-20 17:25:30.219355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.234177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.234196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.248860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.248879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.261847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.261866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.276639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.276658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.291453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 
[2024-11-20 17:25:30.291472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.306061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.306079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.320693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.320712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.331957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.331976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.346927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.346946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.361842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.361860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.376716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.376735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.390484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.390503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.405451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.405470] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.420638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.420657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.434225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.434244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.448900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.448918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.460278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.460296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.439 [2024-11-20 17:25:30.474990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.439 [2024-11-20 17:25:30.475009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.489847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.489865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.504807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.504826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.519312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.519336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:12.699 [2024-11-20 17:25:30.533828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.533847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.548811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.548830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.562943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.562962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.577493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.577511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.593376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.593395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.608970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.608988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.621854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.621871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 [2024-11-20 17:25:30.637341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.637369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699 17062.40 IOPS, 133.30 MiB/s 
[2024-11-20T16:25:30.742Z] [2024-11-20 17:25:30.648974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.699 [2024-11-20 17:25:30.648992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.699
00:31:12.699 Latency(us)
[2024-11-20T16:25:30.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:12.699 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:12.699 Nvme1n1 : 5.01 17065.91 133.33 0.00 0.00 7493.65 1880.26 13232.03
[2024-11-20T16:25:30.742Z] ===================================================================================================================
[2024-11-20T16:25:30.742Z] Total : 17065.91 133.33 0.00 0.00 7493.65 1880.26 13232.03
[... the subsystem.c:2126 / nvmf_rpc.c:1520 error pair repeated between 17:25:30.660 and 17:25:30.792 ...]
[2024-11-20 17:25:30.804949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.958 [2024-11-20 17:25:30.804959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:12.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2712786) - No such process 00:31:12.958 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2712786 00:31:12.958 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.958 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.958 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.958 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.959 delay0 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.959 17:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:12.959 [2024-11-20 17:25:30.950908] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:19.531 Initializing NVMe Controllers 00:31:19.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:19.531 Initialization complete. Launching workers. 00:31:19.531 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 10005 00:31:19.531 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10234, failed to submit 63 00:31:19.531 success 10105, unsuccessful 129, failed 0 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:19.531 rmmod nvme_tcp 00:31:19.531 rmmod nvme_fabrics 00:31:19.531 rmmod nvme_keyring 00:31:19.531 
17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2710933 ']' 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2710933 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2710933 ']' 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2710933 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2710933 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710933' 00:31:19.531 killing process with pid 2710933 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2710933 00:31:19.531 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2710933 00:31:19.791 17:25:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.791 17:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:22.330 00:31:22.330 real 0m31.677s 00:31:22.330 user 0m40.859s 00:31:22.330 sys 0m12.618s 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:22.330 ************************************ 
00:31:22.330 END TEST nvmf_zcopy 00:31:22.330 ************************************ 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:22.330 ************************************ 00:31:22.330 START TEST nvmf_nmic 00:31:22.330 ************************************ 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:22.330 * Looking for test storage... 
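The xtrace entries that follow (`scripts/common.sh` `cmp_versions`, `lt`) are the test harness comparing the installed lcov version (1.15) against 2 to decide which coverage flags to export. A hypothetical, simplified standalone sketch of that dotted-version comparison — the function names mirror the trace, but the bodies here are illustrative only, not the exact SPDK source:

```shell
#!/usr/bin/env bash
# Split two dotted versions on '.' and '-' and compare them field by field,
# treating missing fields as 0 (so 1.15 vs 2 compares 1<2 and stops there).
cmp_versions() { # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v max d1 d2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    # All fields equal: only the equality-accepting operators succeed.
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

# The decision recorded in the trace: lcov 1.15 is older than 2,
# so the harness falls back to the legacy --rc lcov_* option spelling.
lt 1.15 2 && echo "lcov 1.15 < 2: use legacy LCOV_OPTS"
```

The field-by-field loop is why `1.15` sorts before `2` even though a plain string comparison would order them the other way.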
00:31:22.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:22.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.330 --rc genhtml_branch_coverage=1 00:31:22.330 --rc genhtml_function_coverage=1 00:31:22.330 --rc genhtml_legend=1 00:31:22.330 --rc geninfo_all_blocks=1 00:31:22.330 --rc geninfo_unexecuted_blocks=1 00:31:22.330 00:31:22.330 ' 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:22.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.330 --rc genhtml_branch_coverage=1 00:31:22.330 --rc genhtml_function_coverage=1 00:31:22.330 --rc genhtml_legend=1 00:31:22.330 --rc geninfo_all_blocks=1 00:31:22.330 --rc geninfo_unexecuted_blocks=1 00:31:22.330 00:31:22.330 ' 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:22.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.330 --rc genhtml_branch_coverage=1 00:31:22.330 --rc genhtml_function_coverage=1 00:31:22.330 --rc genhtml_legend=1 00:31:22.330 --rc geninfo_all_blocks=1 00:31:22.330 --rc geninfo_unexecuted_blocks=1 00:31:22.330 00:31:22.330 ' 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:22.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.330 --rc genhtml_branch_coverage=1 00:31:22.330 --rc genhtml_function_coverage=1 00:31:22.330 --rc genhtml_legend=1 00:31:22.330 --rc geninfo_all_blocks=1 00:31:22.330 --rc geninfo_unexecuted_blocks=1 00:31:22.330 00:31:22.330 ' 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.330 17:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:22.330 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.330 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.331 17:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.901 17:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.901 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:28.902 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:28.902 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
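The loop above walks each discovered PCI function and reports its vendor/device IDs ("Found 0000:86:00.0 (0x8086 - 0x159b)" is an Intel E810-family NIC). A minimal standalone sketch of the same matching idea, with hypothetical variable names rather than SPDK's actual `nvmf/common.sh` arrays, could look like:

```shell
# Sketch only: enumerate PCI devices under /sys and match known E810 IDs,
# mirroring how the log's device scan classifies NICs. Not SPDK's script.
intel=0x8086
declare -a found=()
for pci in /sys/bus/pci/devices/*; do
  [ -e "$pci/vendor" ] || continue          # skip if sysfs entry is absent
  vendor=$(cat "$pci/vendor")
  device=$(cat "$pci/device")
  case "$vendor:$device" in
    "$intel:0x159b"|"$intel:0x1592")        # E810 device IDs seen in the log
      found+=("${pci##*/}")
      echo "Found ${pci##*/} ($vendor - $device)" ;;
  esac
done
summary="matched ${#found[@]} device(s)"
echo "$summary"
```

On a host without E810 NICs the loop simply matches nothing; the real script goes on to resolve each matched PCI address to its net device via `/sys/bus/pci/devices/$pci/net/`, as the subsequent log lines show.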
00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:28.902 Found net devices under 0000:86:00.0: cvl_0_0 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:28.902 Found net devices under 0000:86:00.1: cvl_0_1 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:28.902 17:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:31:28.902 00:31:28.902 --- 10.0.0.2 ping statistics --- 00:31:28.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.902 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:28.902 00:31:28.902 --- 10.0.0.1 ping statistics --- 00:31:28.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.902 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:28.902 17:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2718138 
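The topology setup logged above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then ping in both directions) can be summarized as a dry-run sketch. Interface and namespace names are taken from the log; the `run` helper is hypothetical and only prints each command, since the real sequence needs root and the physical NICs:

```shell
# Dry-run sketch of the log's TCP test topology: prints commands, does not run them.
NS=cvl_0_0_ns_spdk        # target-side network namespace (name from the log)
TGT_IF=cvl_0_0            # NIC moved into the namespace (target side)
INI_IF=cvl_0_1            # NIC left in the root namespace (initiator side)
CMDS=""
run() { echo "+ $*"; CMDS="$CMDS$*"$'\n'; }   # swap body for 'sudo "$@"' to apply
run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```

Running the target inside its own namespace is what lets a single machine act as both NVMe-oF target and initiator over a real NIC pair, which is why the log's `nvmf_tgt` launch below is prefixed with `ip netns exec cvl_0_0_ns_spdk`.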
00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2718138 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2718138 ']' 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.902 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.902 [2024-11-20 17:25:46.067847] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:28.902 [2024-11-20 17:25:46.068729] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:31:28.902 [2024-11-20 17:25:46.068762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.902 [2024-11-20 17:25:46.147744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:28.903 [2024-11-20 17:25:46.190994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.903 [2024-11-20 17:25:46.191033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.903 [2024-11-20 17:25:46.191040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.903 [2024-11-20 17:25:46.191046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.903 [2024-11-20 17:25:46.191051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.903 [2024-11-20 17:25:46.192562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.903 [2024-11-20 17:25:46.192670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:28.903 [2024-11-20 17:25:46.192798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.903 [2024-11-20 17:25:46.192799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:28.903 [2024-11-20 17:25:46.261276] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.903 [2024-11-20 17:25:46.261879] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:28.903 [2024-11-20 17:25:46.262217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:28.903 [2024-11-20 17:25:46.262514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:28.903 [2024-11-20 17:25:46.262566] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 [2024-11-20 17:25:46.333544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 Malloc0 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 [2024-11-20 17:25:46.425669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.903 17:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:28.903 test case1: single bdev can't be used in multiple subsystems 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 [2024-11-20 17:25:46.457150] 
bdev.c:8473:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:28.903 [2024-11-20 17:25:46.457170] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:28.903 [2024-11-20 17:25:46.457178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:28.903 request: 00:31:28.903 { 00:31:28.903 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:28.903 "namespace": { 00:31:28.903 "bdev_name": "Malloc0", 00:31:28.903 "no_auto_visible": false, 00:31:28.903 "hide_metadata": false 00:31:28.903 }, 00:31:28.903 "method": "nvmf_subsystem_add_ns", 00:31:28.903 "req_id": 1 00:31:28.903 } 00:31:28.903 Got JSON-RPC error response 00:31:28.903 response: 00:31:28.903 { 00:31:28.903 "code": -32602, 00:31:28.903 "message": "Invalid parameters" 00:31:28.903 } 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:28.903 Adding namespace failed - expected result. 
00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:28.903 test case2: host connect to nvmf target in multiple paths 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:28.903 [2024-11-20 17:25:46.469244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:28.903 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:29.162 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:29.162 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:29.162 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:29.162 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:29.162 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:31.065 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:31.065 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:31.065 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:31.065 17:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:31.065 17:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:31.065 17:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:31.065 17:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:31.065 [global] 00:31:31.065 thread=1 00:31:31.065 invalidate=1 00:31:31.065 rw=write 00:31:31.065 time_based=1 00:31:31.065 runtime=1 00:31:31.065 ioengine=libaio 00:31:31.065 direct=1 00:31:31.065 bs=4096 00:31:31.065 iodepth=1 00:31:31.065 norandommap=0 00:31:31.065 numjobs=1 00:31:31.065 00:31:31.065 verify_dump=1 00:31:31.065 verify_backlog=512 00:31:31.065 verify_state_save=0 00:31:31.065 do_verify=1 00:31:31.065 verify=crc32c-intel 00:31:31.065 [job0] 00:31:31.065 filename=/dev/nvme0n1 00:31:31.065 Could not set queue depth (nvme0n1) 00:31:31.322 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.322 fio-3.35 00:31:31.322 Starting 1 thread 00:31:32.699 00:31:32.699 job0: (groupid=0, jobs=1): err= 0: pid=2718825: Wed Nov 20 
17:25:50 2024 00:31:32.699 read: IOPS=22, BW=89.7KiB/s (91.8kB/s)(92.0KiB/1026msec) 00:31:32.699 slat (nsec): min=9258, max=23086, avg=21855.52, stdev=2756.29 00:31:32.699 clat (usec): min=40869, max=41718, avg=40996.62, stdev=159.71 00:31:32.699 lat (usec): min=40891, max=41727, avg=41018.47, stdev=157.00 00:31:32.699 clat percentiles (usec): 00:31:32.699 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:32.699 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:32.699 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:32.699 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:32.699 | 99.99th=[41681] 00:31:32.699 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:31:32.699 slat (nsec): min=8942, max=44740, avg=10892.97, stdev=2216.50 00:31:32.699 clat (usec): min=124, max=315, avg=146.80, stdev=38.48 00:31:32.699 lat (usec): min=135, max=343, avg=157.69, stdev=38.82 00:31:32.699 clat percentiles (usec): 00:31:32.699 | 1.00th=[ 126], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 130], 00:31:32.699 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 133], 00:31:32.699 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 241], 95.00th=[ 243], 00:31:32.699 | 99.00th=[ 247], 99.50th=[ 247], 99.90th=[ 318], 99.95th=[ 318], 00:31:32.699 | 99.99th=[ 318] 00:31:32.699 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:31:32.699 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:32.699 lat (usec) : 250=95.33%, 500=0.37% 00:31:32.699 lat (msec) : 50=4.30% 00:31:32.699 cpu : usr=0.29%, sys=0.49%, ctx=535, majf=0, minf=1 00:31:32.699 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.699 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.699 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:32.699 00:31:32.699 Run status group 0 (all jobs): 00:31:32.699 READ: bw=89.7KiB/s (91.8kB/s), 89.7KiB/s-89.7KiB/s (91.8kB/s-91.8kB/s), io=92.0KiB (94.2kB), run=1026-1026msec 00:31:32.699 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:31:32.699 00:31:32.699 Disk stats (read/write): 00:31:32.699 nvme0n1: ios=68/512, merge=0/0, ticks=803/77, in_queue=880, util=91.28% 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:32.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:32.699 17:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:32.699 rmmod nvme_tcp 00:31:32.699 rmmod nvme_fabrics 00:31:32.699 rmmod nvme_keyring 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.699 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:32.700 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:32.700 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2718138 ']' 00:31:32.700 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2718138 00:31:32.700 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2718138 ']' 00:31:32.700 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2718138 00:31:32.700 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:32.700 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.700 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718138 
00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718138' 00:31:32.958 killing process with pid 2718138 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2718138 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2718138 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.958 17:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.958 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.495 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:35.495 00:31:35.495 real 0m13.178s 00:31:35.495 user 0m24.229s 00:31:35.495 sys 0m6.206s 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:35.495 ************************************ 00:31:35.495 END TEST nvmf_nmic 00:31:35.495 ************************************ 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:35.495 ************************************ 00:31:35.495 START TEST nvmf_fio_target 00:31:35.495 ************************************ 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:35.495 * Looking for test storage... 
00:31:35.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.495 
17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.495 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:35.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.496 --rc genhtml_branch_coverage=1 00:31:35.496 --rc genhtml_function_coverage=1 00:31:35.496 --rc genhtml_legend=1 00:31:35.496 --rc geninfo_all_blocks=1 00:31:35.496 --rc geninfo_unexecuted_blocks=1 00:31:35.496 00:31:35.496 ' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:35.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.496 --rc genhtml_branch_coverage=1 00:31:35.496 --rc genhtml_function_coverage=1 00:31:35.496 --rc genhtml_legend=1 00:31:35.496 --rc geninfo_all_blocks=1 00:31:35.496 --rc geninfo_unexecuted_blocks=1 00:31:35.496 00:31:35.496 ' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:35.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.496 --rc genhtml_branch_coverage=1 00:31:35.496 --rc genhtml_function_coverage=1 00:31:35.496 --rc genhtml_legend=1 00:31:35.496 --rc geninfo_all_blocks=1 00:31:35.496 --rc geninfo_unexecuted_blocks=1 00:31:35.496 00:31:35.496 ' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:35.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.496 --rc genhtml_branch_coverage=1 00:31:35.496 --rc genhtml_function_coverage=1 00:31:35.496 --rc genhtml_legend=1 00:31:35.496 --rc geninfo_all_blocks=1 
00:31:35.496 --rc geninfo_unexecuted_blocks=1 00:31:35.496 00:31:35.496 ' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:35.496 
17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.496 17:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.496 
17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:35.496 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.496 17:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:42.068 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:42.069 17:25:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:42.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:42.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.069 
17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:42.069 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:42.069 Found net devices under 0000:86:00.1: cvl_0_1 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:42.069 17:25:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:42.069 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:42.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:31:42.069 00:31:42.069 --- 10.0.0.2 ping statistics --- 00:31:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.069 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:31:42.069 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:42.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:31:42.069 00:31:42.069 --- 10.0.0.1 ping statistics --- 00:31:42.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.069 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:42.070 17:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2722503 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2722503 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2722503 ']' 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:42.070 [2024-11-20 17:25:59.263232] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:42.070 [2024-11-20 17:25:59.264175] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:31:42.070 [2024-11-20 17:25:59.264225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.070 [2024-11-20 17:25:59.343086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:42.070 [2024-11-20 17:25:59.386319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.070 [2024-11-20 17:25:59.386356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.070 [2024-11-20 17:25:59.386363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.070 [2024-11-20 17:25:59.386369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.070 [2024-11-20 17:25:59.386375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:42.070 [2024-11-20 17:25:59.387949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.070 [2024-11-20 17:25:59.388069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:42.070 [2024-11-20 17:25:59.388178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.070 [2024-11-20 17:25:59.388179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:42.070 [2024-11-20 17:25:59.457657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:42.070 [2024-11-20 17:25:59.458442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:42.070 [2024-11-20 17:25:59.458644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:42.070 [2024-11-20 17:25:59.458976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:42.070 [2024-11-20 17:25:59.459029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:42.070 [2024-11-20 17:25:59.692947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:42.070 17:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:42.329 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:42.329 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:42.588 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:42.588 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:42.588 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:42.588 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:42.847 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:43.106 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:43.106 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:43.364 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:43.364 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:43.364 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:43.364 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:43.623 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:43.882 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:43.882 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:44.140 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:44.140 17:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:44.140 17:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.398 [2024-11-20 17:26:02.328860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.398 17:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:44.657 17:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:44.915 17:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:45.173 17:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:45.173 17:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:45.173 17:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:45.173 17:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:45.173 17:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:45.173 17:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:47.073 17:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:47.073 17:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:47.073 17:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:47.073 17:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:47.073 17:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:47.073 17:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:47.073 17:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:47.073 [global] 00:31:47.073 thread=1 00:31:47.073 invalidate=1 00:31:47.073 rw=write 00:31:47.073 time_based=1 00:31:47.073 runtime=1 00:31:47.073 ioengine=libaio 00:31:47.073 direct=1 00:31:47.073 bs=4096 00:31:47.073 iodepth=1 00:31:47.073 norandommap=0 00:31:47.073 numjobs=1 00:31:47.073 00:31:47.073 verify_dump=1 00:31:47.073 verify_backlog=512 00:31:47.073 verify_state_save=0 00:31:47.073 do_verify=1 00:31:47.073 verify=crc32c-intel 00:31:47.073 [job0] 00:31:47.073 filename=/dev/nvme0n1 00:31:47.073 [job1] 00:31:47.073 filename=/dev/nvme0n2 00:31:47.073 [job2] 00:31:47.073 filename=/dev/nvme0n3 00:31:47.073 [job3] 00:31:47.073 filename=/dev/nvme0n4 00:31:47.345 Could not set queue depth (nvme0n1) 00:31:47.345 Could not set queue depth (nvme0n2) 00:31:47.345 Could not set queue depth (nvme0n3) 00:31:47.345 Could not set queue depth (nvme0n4) 00:31:47.603 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.603 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.603 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.603 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.603 fio-3.35 00:31:47.603 Starting 4 threads 00:31:48.972 00:31:48.972 job0: (groupid=0, jobs=1): err= 0: pid=2723751: Wed Nov 20 17:26:06 2024 00:31:48.972 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:48.972 slat (nsec): min=6787, max=37092, avg=7818.86, stdev=1141.82 00:31:48.972 clat (usec): min=170, max=457, avg=209.57, stdev=27.17 00:31:48.972 lat (usec): min=178, max=465, 
avg=217.39, stdev=27.20 00:31:48.972 clat percentiles (usec): 00:31:48.972 | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 186], 00:31:48.972 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:31:48.972 | 70.00th=[ 212], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 251], 00:31:48.972 | 99.00th=[ 260], 99.50th=[ 351], 99.90th=[ 433], 99.95th=[ 441], 00:31:48.972 | 99.99th=[ 457] 00:31:48.972 write: IOPS=2659, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:31:48.972 slat (nsec): min=9914, max=63681, avg=11140.13, stdev=1785.60 00:31:48.972 clat (usec): min=119, max=517, avg=149.71, stdev=21.54 00:31:48.972 lat (usec): min=133, max=528, avg=160.85, stdev=21.86 00:31:48.972 clat percentiles (usec): 00:31:48.972 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:31:48.972 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 151], 00:31:48.972 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 184], 95.00th=[ 192], 00:31:48.972 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 249], 99.95th=[ 253], 00:31:48.972 | 99.99th=[ 519] 00:31:48.972 bw ( KiB/s): min=12288, max=12288, per=61.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:48.972 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:48.972 lat (usec) : 250=97.38%, 500=2.60%, 750=0.02% 00:31:48.972 cpu : usr=3.80%, sys=8.60%, ctx=5223, majf=0, minf=1 00:31:48.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.972 issued rwts: total=2560,2662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.972 job1: (groupid=0, jobs=1): err= 0: pid=2723765: Wed Nov 20 17:26:06 2024 00:31:48.972 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:31:48.972 slat (nsec): min=11508, max=22190, 
avg=13847.73, stdev=2883.29 00:31:48.972 clat (usec): min=40757, max=41097, avg=40973.49, stdev=75.69 00:31:48.972 lat (usec): min=40768, max=41110, avg=40987.34, stdev=75.62 00:31:48.972 clat percentiles (usec): 00:31:48.972 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:48.972 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:48.972 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:48.972 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:48.972 | 99.99th=[41157] 00:31:48.972 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:31:48.972 slat (nsec): min=10098, max=48302, avg=13134.28, stdev=3803.03 00:31:48.972 clat (usec): min=139, max=353, avg=210.78, stdev=29.23 00:31:48.972 lat (usec): min=152, max=365, avg=223.91, stdev=29.49 00:31:48.972 clat percentiles (usec): 00:31:48.972 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 184], 00:31:48.972 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 225], 00:31:48.972 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 253], 00:31:48.972 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 355], 99.95th=[ 355], 00:31:48.972 | 99.99th=[ 355] 00:31:48.972 bw ( KiB/s): min= 4096, max= 4096, per=20.33%, avg=4096.00, stdev= 0.00, samples=1 00:31:48.972 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:48.972 lat (usec) : 250=89.89%, 500=5.99% 00:31:48.972 lat (msec) : 50=4.12% 00:31:48.972 cpu : usr=0.29%, sys=0.59%, ctx=537, majf=0, minf=1 00:31:48.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.972 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.972 job2: 
(groupid=0, jobs=1): err= 0: pid=2723782: Wed Nov 20 17:26:06 2024 00:31:48.972 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:31:48.972 slat (nsec): min=10660, max=27753, avg=23484.09, stdev=3081.06 00:31:48.972 clat (usec): min=40840, max=42024, avg=41016.21, stdev=230.48 00:31:48.972 lat (usec): min=40863, max=42035, avg=41039.69, stdev=227.67 00:31:48.972 clat percentiles (usec): 00:31:48.972 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:48.972 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:48.972 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:48.972 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:48.972 | 99.99th=[42206] 00:31:48.972 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:31:48.972 slat (nsec): min=10940, max=37907, avg=12582.38, stdev=2171.72 00:31:48.972 clat (usec): min=146, max=470, avg=184.13, stdev=22.95 00:31:48.972 lat (usec): min=157, max=481, avg=196.72, stdev=23.31 00:31:48.972 clat percentiles (usec): 00:31:48.972 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:31:48.972 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:31:48.972 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:31:48.972 | 99.00th=[ 251], 99.50th=[ 306], 99.90th=[ 469], 99.95th=[ 469], 00:31:48.972 | 99.99th=[ 469] 00:31:48.972 bw ( KiB/s): min= 4104, max= 4104, per=20.37%, avg=4104.00, stdev= 0.00, samples=1 00:31:48.972 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:31:48.972 lat (usec) : 250=94.76%, 500=1.12% 00:31:48.972 lat (msec) : 50=4.12% 00:31:48.972 cpu : usr=0.40%, sys=0.99%, ctx=536, majf=0, minf=1 00:31:48.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.972 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.972 job3: (groupid=0, jobs=1): err= 0: pid=2723787: Wed Nov 20 17:26:06 2024 00:31:48.972 read: IOPS=998, BW=3992KiB/s (4088kB/s)(4140KiB/1037msec) 00:31:48.972 slat (nsec): min=6945, max=31581, avg=8239.94, stdev=2120.38 00:31:48.972 clat (usec): min=220, max=41405, avg=686.85, stdev=4181.73 00:31:48.972 lat (usec): min=228, max=41415, avg=695.09, stdev=4183.24 00:31:48.972 clat percentiles (usec): 00:31:48.972 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 243], 00:31:48.972 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 249], 00:31:48.972 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 297], 00:31:48.972 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:48.972 | 99.99th=[41157] 00:31:48.972 write: IOPS=1481, BW=5925KiB/s (6067kB/s)(6144KiB/1037msec); 0 zone resets 00:31:48.972 slat (nsec): min=10052, max=42761, avg=12190.94, stdev=2899.85 00:31:48.972 clat (usec): min=139, max=372, avg=187.60, stdev=31.52 00:31:48.972 lat (usec): min=150, max=405, avg=199.79, stdev=32.72 00:31:48.972 clat percentiles (usec): 00:31:48.972 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:31:48.972 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:31:48.972 | 70.00th=[ 194], 80.00th=[ 212], 90.00th=[ 237], 95.00th=[ 243], 00:31:48.972 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 371], 99.95th=[ 371], 00:31:48.972 | 99.99th=[ 371] 00:31:48.972 bw ( KiB/s): min= 3416, max= 8872, per=30.50%, avg=6144.00, stdev=3857.97, samples=2 00:31:48.972 iops : min= 854, max= 2218, avg=1536.00, stdev=964.49, samples=2 00:31:48.972 lat (usec) : 250=83.16%, 500=16.30%, 750=0.12% 00:31:48.972 lat (msec) : 50=0.43% 00:31:48.972 cpu : usr=1.54%, sys=2.70%, ctx=2573, majf=0, minf=1 00:31:48.972 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.972 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.972 00:31:48.972 Run status group 0 (all jobs): 00:31:48.972 READ: bw=13.7MiB/s (14.4MB/s), 86.1KiB/s-9.99MiB/s (88.2kB/s-10.5MB/s), io=14.2MiB (14.9MB), run=1001-1037msec 00:31:48.972 WRITE: bw=19.7MiB/s (20.6MB/s), 2004KiB/s-10.4MiB/s (2052kB/s-10.9MB/s), io=20.4MiB (21.4MB), run=1001-1037msec 00:31:48.972 00:31:48.972 Disk stats (read/write): 00:31:48.972 nvme0n1: ios=2098/2537, merge=0/0, ticks=401/361, in_queue=762, util=86.47% 00:31:48.972 nvme0n2: ios=66/512, merge=0/0, ticks=1579/106, in_queue=1685, util=89.01% 00:31:48.972 nvme0n3: ios=81/512, merge=0/0, ticks=1017/84, in_queue=1101, util=94.15% 00:31:48.972 nvme0n4: ios=1053/1536, merge=0/0, ticks=1415/286, in_queue=1701, util=93.68% 00:31:48.972 17:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:48.972 [global] 00:31:48.972 thread=1 00:31:48.972 invalidate=1 00:31:48.972 rw=randwrite 00:31:48.972 time_based=1 00:31:48.972 runtime=1 00:31:48.972 ioengine=libaio 00:31:48.972 direct=1 00:31:48.972 bs=4096 00:31:48.972 iodepth=1 00:31:48.972 norandommap=0 00:31:48.972 numjobs=1 00:31:48.972 00:31:48.972 verify_dump=1 00:31:48.973 verify_backlog=512 00:31:48.973 verify_state_save=0 00:31:48.973 do_verify=1 00:31:48.973 verify=crc32c-intel 00:31:48.973 [job0] 00:31:48.973 filename=/dev/nvme0n1 00:31:48.973 [job1] 00:31:48.973 filename=/dev/nvme0n2 00:31:48.973 [job2] 00:31:48.973 filename=/dev/nvme0n3 00:31:48.973 [job3] 00:31:48.973 filename=/dev/nvme0n4 00:31:48.973 Could not set queue 
depth (nvme0n1) 00:31:48.973 Could not set queue depth (nvme0n2) 00:31:48.973 Could not set queue depth (nvme0n3) 00:31:48.973 Could not set queue depth (nvme0n4) 00:31:48.973 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:48.973 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:48.973 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:48.973 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:48.973 fio-3.35 00:31:48.973 Starting 4 threads 00:31:50.343 00:31:50.343 job0: (groupid=0, jobs=1): err= 0: pid=2724176: Wed Nov 20 17:26:08 2024 00:31:50.343 read: IOPS=1909, BW=7636KiB/s (7820kB/s)(7644KiB/1001msec) 00:31:50.343 slat (nsec): min=2112, max=38719, avg=6422.81, stdev=2850.06 00:31:50.343 clat (usec): min=163, max=618, avg=316.49, stdev=94.68 00:31:50.343 lat (usec): min=166, max=620, avg=322.91, stdev=95.68 00:31:50.343 clat percentiles (usec): 00:31:50.343 | 1.00th=[ 174], 5.00th=[ 200], 10.00th=[ 215], 20.00th=[ 237], 00:31:50.343 | 30.00th=[ 253], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 314], 00:31:50.343 | 70.00th=[ 347], 80.00th=[ 412], 90.00th=[ 469], 95.00th=[ 502], 00:31:50.343 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 611], 99.95th=[ 619], 00:31:50.343 | 99.99th=[ 619] 00:31:50.343 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:50.343 slat (nsec): min=3342, max=46156, avg=10259.52, stdev=5460.19 00:31:50.343 clat (usec): min=115, max=428, avg=172.81, stdev=33.67 00:31:50.343 lat (usec): min=125, max=459, avg=183.07, stdev=34.90 00:31:50.343 clat percentiles (usec): 00:31:50.343 | 1.00th=[ 125], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 147], 00:31:50.343 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:31:50.343 | 70.00th=[ 180], 80.00th=[ 
200], 90.00th=[ 221], 95.00th=[ 237], 00:31:50.343 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 400], 99.95th=[ 408], 00:31:50.343 | 99.99th=[ 429] 00:31:50.343 bw ( KiB/s): min= 8480, max= 8480, per=27.08%, avg=8480.00, stdev= 0.00, samples=1 00:31:50.343 iops : min= 2120, max= 2120, avg=2120.00, stdev= 0.00, samples=1 00:31:50.343 lat (usec) : 250=65.02%, 500=32.38%, 750=2.60% 00:31:50.343 cpu : usr=2.10%, sys=3.30%, ctx=3960, majf=0, minf=1 00:31:50.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.343 issued rwts: total=1911,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.343 job1: (groupid=0, jobs=1): err= 0: pid=2724189: Wed Nov 20 17:26:08 2024 00:31:50.343 read: IOPS=2000, BW=8004KiB/s (8196kB/s)(8012KiB/1001msec) 00:31:50.343 slat (nsec): min=2015, max=24331, avg=4099.49, stdev=2513.94 00:31:50.343 clat (usec): min=157, max=41320, avg=306.45, stdev=921.01 00:31:50.343 lat (usec): min=160, max=41324, avg=310.55, stdev=921.13 00:31:50.343 clat percentiles (usec): 00:31:50.343 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 208], 20.00th=[ 221], 00:31:50.343 | 30.00th=[ 233], 40.00th=[ 247], 50.00th=[ 262], 60.00th=[ 277], 00:31:50.343 | 70.00th=[ 293], 80.00th=[ 326], 90.00th=[ 461], 95.00th=[ 498], 00:31:50.343 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 553], 00:31:50.343 | 99.99th=[41157] 00:31:50.343 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:50.343 slat (nsec): min=2887, max=73006, avg=4140.32, stdev=2446.69 00:31:50.343 clat (usec): min=105, max=1661, avg=177.76, stdev=45.64 00:31:50.343 lat (usec): min=108, max=1665, avg=181.90, stdev=45.78 00:31:50.343 clat percentiles (usec): 00:31:50.343 | 1.00th=[ 119], 5.00th=[ 130], 
10.00th=[ 137], 20.00th=[ 149], 00:31:50.343 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 184], 00:31:50.343 | 70.00th=[ 192], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 231], 00:31:50.343 | 99.00th=[ 258], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 314], 00:31:50.343 | 99.99th=[ 1663] 00:31:50.343 bw ( KiB/s): min= 9568, max= 9568, per=30.55%, avg=9568.00, stdev= 0.00, samples=1 00:31:50.343 iops : min= 2392, max= 2392, avg=2392.00, stdev= 0.00, samples=1 00:31:50.343 lat (usec) : 250=71.14%, 500=26.88%, 750=1.93% 00:31:50.343 lat (msec) : 2=0.02%, 50=0.02% 00:31:50.343 cpu : usr=1.80%, sys=2.20%, ctx=4052, majf=0, minf=1 00:31:50.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.343 issued rwts: total=2003,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.343 job2: (groupid=0, jobs=1): err= 0: pid=2724204: Wed Nov 20 17:26:08 2024 00:31:50.343 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:31:50.343 slat (nsec): min=7149, max=23918, avg=8577.09, stdev=1336.25 00:31:50.343 clat (usec): min=188, max=615, avg=364.71, stdev=109.27 00:31:50.343 lat (usec): min=196, max=624, avg=373.29, stdev=109.19 00:31:50.343 clat percentiles (usec): 00:31:50.343 | 1.00th=[ 194], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 247], 00:31:50.343 | 30.00th=[ 281], 40.00th=[ 314], 50.00th=[ 347], 60.00th=[ 416], 00:31:50.343 | 70.00th=[ 457], 80.00th=[ 490], 90.00th=[ 506], 95.00th=[ 523], 00:31:50.343 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 578], 99.95th=[ 619], 00:31:50.343 | 99.99th=[ 619] 00:31:50.343 write: IOPS=1691, BW=6765KiB/s (6928kB/s)(6772KiB/1001msec); 0 zone resets 00:31:50.343 slat (nsec): min=8442, max=55928, avg=11250.68, stdev=1853.90 00:31:50.343 clat (usec): min=139, 
max=392, avg=235.10, stdev=56.62 00:31:50.343 lat (usec): min=152, max=424, avg=246.35, stdev=56.66 00:31:50.343 clat percentiles (usec): 00:31:50.343 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 178], 00:31:50.343 | 30.00th=[ 192], 40.00th=[ 210], 50.00th=[ 229], 60.00th=[ 253], 00:31:50.343 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 334], 00:31:50.343 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 388], 99.95th=[ 392], 00:31:50.343 | 99.99th=[ 392] 00:31:50.343 bw ( KiB/s): min= 8192, max= 8192, per=26.16%, avg=8192.00, stdev= 0.00, samples=1 00:31:50.343 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:50.343 lat (usec) : 250=40.79%, 500=52.25%, 750=6.97% 00:31:50.343 cpu : usr=1.90%, sys=5.60%, ctx=3229, majf=0, minf=1 00:31:50.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.343 issued rwts: total=1536,1693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.343 job3: (groupid=0, jobs=1): err= 0: pid=2724209: Wed Nov 20 17:26:08 2024 00:31:50.343 read: IOPS=1745, BW=6981KiB/s (7149kB/s)(6988KiB/1001msec) 00:31:50.343 slat (nsec): min=6702, max=27772, avg=7612.88, stdev=1189.21 00:31:50.344 clat (usec): min=198, max=41379, avg=350.05, stdev=2124.76 00:31:50.344 lat (usec): min=205, max=41392, avg=357.66, stdev=2125.26 00:31:50.344 clat percentiles (usec): 00:31:50.344 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 219], 00:31:50.344 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 227], 00:31:50.344 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 237], 95.00th=[ 245], 00:31:50.344 | 99.00th=[ 433], 99.50th=[ 474], 99.90th=[41157], 99.95th=[41157], 00:31:50.344 | 99.99th=[41157] 00:31:50.344 write: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:50.344 slat (nsec): min=9311, max=44388, avg=10522.17, stdev=1604.21 00:31:50.344 clat (usec): min=134, max=366, avg=168.76, stdev=17.95 00:31:50.344 lat (usec): min=144, max=403, avg=179.28, stdev=18.35 00:31:50.344 clat percentiles (usec): 00:31:50.344 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:31:50.344 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:31:50.344 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 200], 00:31:50.344 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 355], 99.95th=[ 359], 00:31:50.344 | 99.99th=[ 367] 00:31:50.344 bw ( KiB/s): min= 6680, max= 6680, per=21.33%, avg=6680.00, stdev= 0.00, samples=1 00:31:50.344 iops : min= 1670, max= 1670, avg=1670.00, stdev= 0.00, samples=1 00:31:50.344 lat (usec) : 250=97.92%, 500=1.92% 00:31:50.344 lat (msec) : 20=0.03%, 50=0.13% 00:31:50.344 cpu : usr=1.60%, sys=3.80%, ctx=3796, majf=0, minf=1 00:31:50.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.344 issued rwts: total=1747,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:50.344 00:31:50.344 Run status group 0 (all jobs): 00:31:50.344 READ: bw=28.1MiB/s (29.4MB/s), 6138KiB/s-8004KiB/s (6285kB/s-8196kB/s), io=28.1MiB (29.5MB), run=1001-1001msec 00:31:50.344 WRITE: bw=30.6MiB/s (32.1MB/s), 6765KiB/s-8184KiB/s (6928kB/s-8380kB/s), io=30.6MiB (32.1MB), run=1001-1001msec 00:31:50.344 00:31:50.344 Disk stats (read/write): 00:31:50.344 nvme0n1: ios=1560/1916, merge=0/0, ticks=699/315, in_queue=1014, util=100.00% 00:31:50.344 nvme0n2: ios=1692/2048, merge=0/0, ticks=484/347, in_queue=831, util=88.12% 00:31:50.344 nvme0n3: ios=1329/1536, merge=0/0, ticks=493/357, in_queue=850, 
util=90.84% 00:31:50.344 nvme0n4: ios=1559/1536, merge=0/0, ticks=1094/253, in_queue=1347, util=98.22% 00:31:50.344 17:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:50.344 [global] 00:31:50.344 thread=1 00:31:50.344 invalidate=1 00:31:50.344 rw=write 00:31:50.344 time_based=1 00:31:50.344 runtime=1 00:31:50.344 ioengine=libaio 00:31:50.344 direct=1 00:31:50.344 bs=4096 00:31:50.344 iodepth=128 00:31:50.344 norandommap=0 00:31:50.344 numjobs=1 00:31:50.344 00:31:50.344 verify_dump=1 00:31:50.344 verify_backlog=512 00:31:50.344 verify_state_save=0 00:31:50.344 do_verify=1 00:31:50.344 verify=crc32c-intel 00:31:50.344 [job0] 00:31:50.344 filename=/dev/nvme0n1 00:31:50.344 [job1] 00:31:50.344 filename=/dev/nvme0n2 00:31:50.344 [job2] 00:31:50.344 filename=/dev/nvme0n3 00:31:50.344 [job3] 00:31:50.344 filename=/dev/nvme0n4 00:31:50.344 Could not set queue depth (nvme0n1) 00:31:50.344 Could not set queue depth (nvme0n2) 00:31:50.344 Could not set queue depth (nvme0n3) 00:31:50.344 Could not set queue depth (nvme0n4) 00:31:50.601 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.601 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.601 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.601 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.601 fio-3.35 00:31:50.601 Starting 4 threads 00:31:51.971 00:31:51.971 job0: (groupid=0, jobs=1): err= 0: pid=2724580: Wed Nov 20 17:26:09 2024 00:31:51.971 read: IOPS=5128, BW=20.0MiB/s (21.0MB/s)(20.1MiB/1003msec) 00:31:51.971 slat (nsec): min=1097, max=13399k, avg=75964.18, stdev=526066.73 00:31:51.971 clat (usec): min=1948, 
max=35907, avg=10859.78, stdev=3934.73 00:31:51.971 lat (usec): min=1956, max=35909, avg=10935.75, stdev=3964.89 00:31:51.971 clat percentiles (usec): 00:31:51.971 | 1.00th=[ 2311], 5.00th=[ 5014], 10.00th=[ 6456], 20.00th=[ 8848], 00:31:51.971 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10683], 60.00th=[10945], 00:31:51.971 | 70.00th=[11469], 80.00th=[12125], 90.00th=[15008], 95.00th=[16057], 00:31:51.971 | 99.00th=[31589], 99.50th=[33162], 99.90th=[34866], 99.95th=[35914], 00:31:51.971 | 99.99th=[35914] 00:31:51.971 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:31:51.971 slat (nsec): min=1832, max=11246k, avg=95198.46, stdev=581417.03 00:31:51.971 clat (usec): min=365, max=48049, avg=12542.30, stdev=7813.89 00:31:51.971 lat (usec): min=575, max=48061, avg=12637.50, stdev=7871.58 00:31:51.971 clat percentiles (usec): 00:31:51.971 | 1.00th=[ 5211], 5.00th=[ 6652], 10.00th=[ 7570], 20.00th=[ 8291], 00:31:51.971 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421], 00:31:51.971 | 70.00th=[10814], 80.00th=[12780], 90.00th=[21627], 95.00th=[34341], 00:31:51.971 | 99.00th=[43779], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:31:51.971 | 99.99th=[47973] 00:31:51.971 bw ( KiB/s): min=21040, max=23192, per=33.13%, avg=22116.00, stdev=1521.69, samples=2 00:31:51.971 iops : min= 5260, max= 5798, avg=5529.00, stdev=380.42, samples=2 00:31:51.971 lat (usec) : 500=0.01%, 750=0.16%, 1000=0.06% 00:31:51.971 lat (msec) : 2=0.07%, 4=1.08%, 10=39.75%, 20=50.58%, 50=8.30% 00:31:51.971 cpu : usr=3.79%, sys=5.39%, ctx=503, majf=0, minf=1 00:31:51.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:51.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.971 issued rwts: total=5144,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.971 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:31:51.971 job1: (groupid=0, jobs=1): err= 0: pid=2724581: Wed Nov 20 17:26:09 2024 00:31:51.971 read: IOPS=4098, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1007msec) 00:31:51.971 slat (nsec): min=1054, max=17565k, avg=108480.86, stdev=812242.52 00:31:51.971 clat (usec): min=1797, max=52312, avg=14220.23, stdev=9679.15 00:31:51.971 lat (usec): min=1817, max=52319, avg=14328.71, stdev=9738.15 00:31:51.971 clat percentiles (usec): 00:31:51.971 | 1.00th=[ 2073], 5.00th=[ 6063], 10.00th=[ 7373], 20.00th=[ 8094], 00:31:51.971 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[10683], 60.00th=[12387], 00:31:51.971 | 70.00th=[15139], 80.00th=[18482], 90.00th=[29230], 95.00th=[38536], 00:31:51.971 | 99.00th=[46400], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:31:51.971 | 99.99th=[52167] 00:31:51.971 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:31:51.971 slat (nsec): min=1848, max=23055k, avg=113807.27, stdev=927384.49 00:31:51.971 clat (usec): min=3418, max=60808, avg=14831.58, stdev=9041.71 00:31:51.971 lat (usec): min=3426, max=60838, avg=14945.38, stdev=9137.86 00:31:51.971 clat percentiles (usec): 00:31:51.971 | 1.00th=[ 5080], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8094], 00:31:51.971 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[10552], 60.00th=[13960], 00:31:51.971 | 70.00th=[16712], 80.00th=[21890], 90.00th=[28443], 95.00th=[33162], 00:31:51.971 | 99.00th=[44303], 99.50th=[44303], 99.90th=[48497], 99.95th=[51119], 00:31:51.971 | 99.99th=[60556] 00:31:51.971 bw ( KiB/s): min=15112, max=20976, per=27.03%, avg=18044.00, stdev=4146.47, samples=2 00:31:51.971 iops : min= 3778, max= 5244, avg=4511.00, stdev=1036.62, samples=2 00:31:51.971 lat (msec) : 2=0.07%, 4=1.34%, 10=44.87%, 20=33.25%, 50=20.05% 00:31:51.971 lat (msec) : 100=0.44% 00:31:51.971 cpu : usr=2.88%, sys=4.08%, ctx=374, majf=0, minf=1 00:31:51.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:51.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.971 issued rwts: total=4127,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.971 job2: (groupid=0, jobs=1): err= 0: pid=2724582: Wed Nov 20 17:26:09 2024 00:31:51.971 read: IOPS=3683, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1007msec) 00:31:51.971 slat (nsec): min=1131, max=17367k, avg=132542.04, stdev=899160.35 00:31:51.971 clat (usec): min=2105, max=57511, avg=17124.57, stdev=7317.17 00:31:51.971 lat (usec): min=3530, max=57516, avg=17257.11, stdev=7379.10 00:31:51.971 clat percentiles (usec): 00:31:51.971 | 1.00th=[ 5145], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[12125], 00:31:51.971 | 30.00th=[13698], 40.00th=[15401], 50.00th=[15795], 60.00th=[16909], 00:31:51.971 | 70.00th=[19006], 80.00th=[20055], 90.00th=[22676], 95.00th=[27657], 00:31:51.971 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50070], 99.95th=[51643], 00:31:51.971 | 99.99th=[57410] 00:31:51.971 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:31:51.971 slat (nsec): min=1912, max=12031k, avg=109999.62, stdev=699991.62 00:31:51.971 clat (usec): min=1841, max=44796, avg=15593.81, stdev=7715.29 00:31:51.971 lat (usec): min=1853, max=44804, avg=15703.81, stdev=7781.81 00:31:51.971 clat percentiles (usec): 00:31:51.971 | 1.00th=[ 3359], 5.00th=[ 7177], 10.00th=[ 8848], 20.00th=[10159], 00:31:51.971 | 30.00th=[10683], 40.00th=[11863], 50.00th=[13042], 60.00th=[15401], 00:31:51.971 | 70.00th=[18220], 80.00th=[20579], 90.00th=[24773], 95.00th=[32637], 00:31:51.971 | 99.00th=[41157], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:31:51.971 | 99.99th=[44827] 00:31:51.971 bw ( KiB/s): min=14512, max=18232, per=24.52%, avg=16372.00, stdev=2630.44, samples=2 00:31:51.971 iops : min= 3628, max= 4558, avg=4093.00, stdev=657.61, samples=2 00:31:51.971 lat (msec) : 2=0.10%, 4=0.72%, 
10=11.98%, 20=66.06%, 50=20.94% 00:31:51.971 lat (msec) : 100=0.20% 00:31:51.971 cpu : usr=2.98%, sys=4.87%, ctx=332, majf=0, minf=2 00:31:51.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:51.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.971 issued rwts: total=3709,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.972 job3: (groupid=0, jobs=1): err= 0: pid=2724583: Wed Nov 20 17:26:09 2024 00:31:51.972 read: IOPS=2630, BW=10.3MiB/s (10.8MB/s)(10.7MiB/1043msec) 00:31:51.972 slat (nsec): min=1210, max=16376k, avg=153823.54, stdev=966319.68 00:31:51.972 clat (usec): min=3947, max=58699, avg=22416.87, stdev=12337.67 00:31:51.972 lat (usec): min=3951, max=65740, avg=22570.69, stdev=12357.53 00:31:51.972 clat percentiles (usec): 00:31:51.972 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[11338], 20.00th=[11994], 00:31:51.972 | 30.00th=[12518], 40.00th=[14877], 50.00th=[18744], 60.00th=[23725], 00:31:51.972 | 70.00th=[26084], 80.00th=[30278], 90.00th=[43779], 95.00th=[50594], 00:31:51.972 | 99.00th=[55313], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:31:51.972 | 99.99th=[58459] 00:31:51.972 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:31:51.972 slat (usec): min=2, max=11343, avg=182.47, stdev=907.28 00:31:51.972 clat (msec): min=6, max=105, avg=23.03, stdev=18.75 00:31:51.972 lat (msec): min=7, max=105, avg=23.21, stdev=18.88 00:31:51.972 clat percentiles (msec): 00:31:51.972 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:31:51.972 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 20], 00:31:51.972 | 70.00th=[ 25], 80.00th=[ 34], 90.00th=[ 45], 95.00th=[ 61], 00:31:51.972 | 99.00th=[ 101], 99.50th=[ 101], 99.90th=[ 106], 99.95th=[ 106], 00:31:51.972 | 99.99th=[ 106] 00:31:51.972 bw ( 
KiB/s): min=12224, max=12352, per=18.41%, avg=12288.00, stdev=90.51, samples=2 00:31:51.972 iops : min= 3056, max= 3088, avg=3072.00, stdev=22.63, samples=2 00:31:51.972 lat (msec) : 4=0.10%, 10=5.38%, 20=52.10%, 50=36.61%, 100=5.31% 00:31:51.972 lat (msec) : 250=0.50% 00:31:51.972 cpu : usr=2.69%, sys=3.36%, ctx=342, majf=0, minf=1 00:31:51.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:51.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.972 issued rwts: total=2744,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.972 00:31:51.972 Run status group 0 (all jobs): 00:31:51.972 READ: bw=58.9MiB/s (61.8MB/s), 10.3MiB/s-20.0MiB/s (10.8MB/s-21.0MB/s), io=61.4MiB (64.4MB), run=1003-1043msec 00:31:51.972 WRITE: bw=65.2MiB/s (68.4MB/s), 11.5MiB/s-21.9MiB/s (12.1MB/s-23.0MB/s), io=68.0MiB (71.3MB), run=1003-1043msec 00:31:51.972 00:31:51.972 Disk stats (read/write): 00:31:51.972 nvme0n1: ios=4154/4608, merge=0/0, ticks=26207/38113, in_queue=64320, util=85.77% 00:31:51.972 nvme0n2: ios=3768/4096, merge=0/0, ticks=22449/25356, in_queue=47805, util=88.32% 00:31:51.972 nvme0n3: ios=3129/3486, merge=0/0, ticks=29820/30584, in_queue=60404, util=93.56% 00:31:51.972 nvme0n4: ios=2370/2560, merge=0/0, ticks=13879/19616, in_queue=33495, util=93.09% 00:31:51.972 17:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:51.972 [global] 00:31:51.972 thread=1 00:31:51.972 invalidate=1 00:31:51.972 rw=randwrite 00:31:51.972 time_based=1 00:31:51.972 runtime=1 00:31:51.972 ioengine=libaio 00:31:51.972 direct=1 00:31:51.972 bs=4096 00:31:51.972 iodepth=128 00:31:51.972 norandommap=0 00:31:51.972 numjobs=1 
00:31:51.972 00:31:51.972 verify_dump=1 00:31:51.972 verify_backlog=512 00:31:51.972 verify_state_save=0 00:31:51.972 do_verify=1 00:31:51.972 verify=crc32c-intel 00:31:51.972 [job0] 00:31:51.972 filename=/dev/nvme0n1 00:31:51.972 [job1] 00:31:51.972 filename=/dev/nvme0n2 00:31:51.972 [job2] 00:31:51.972 filename=/dev/nvme0n3 00:31:51.972 [job3] 00:31:51.972 filename=/dev/nvme0n4 00:31:51.972 Could not set queue depth (nvme0n1) 00:31:51.972 Could not set queue depth (nvme0n2) 00:31:51.972 Could not set queue depth (nvme0n3) 00:31:51.972 Could not set queue depth (nvme0n4) 00:31:52.229 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:52.229 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:52.229 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:52.229 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:52.229 fio-3.35 00:31:52.229 Starting 4 threads 00:31:53.599 00:31:53.599 job0: (groupid=0, jobs=1): err= 0: pid=2724954: Wed Nov 20 17:26:11 2024 00:31:53.599 read: IOPS=4013, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1009msec) 00:31:53.599 slat (nsec): min=1093, max=17940k, avg=119215.47, stdev=992513.60 00:31:53.599 clat (usec): min=6123, max=72815, avg=15194.42, stdev=9845.69 00:31:53.599 lat (usec): min=6130, max=72821, avg=15313.63, stdev=9935.91 00:31:53.599 clat percentiles (usec): 00:31:53.599 | 1.00th=[ 6194], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8455], 00:31:53.599 | 30.00th=[ 9372], 40.00th=[11731], 50.00th=[13698], 60.00th=[15139], 00:31:53.599 | 70.00th=[16450], 80.00th=[18482], 90.00th=[21103], 95.00th=[31589], 00:31:53.599 | 99.00th=[62129], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:31:53.600 | 99.99th=[72877] 00:31:53.600 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 
zone resets 00:31:53.600 slat (usec): min=2, max=10469, avg=113.77, stdev=763.56 00:31:53.600 clat (usec): min=856, max=90779, avg=16223.69, stdev=16543.34 00:31:53.600 lat (usec): min=883, max=90788, avg=16337.46, stdev=16647.38 00:31:53.600 clat percentiles (usec): 00:31:53.600 | 1.00th=[ 3949], 5.00th=[ 6325], 10.00th=[ 7439], 20.00th=[ 7832], 00:31:53.600 | 30.00th=[ 8356], 40.00th=[ 9110], 50.00th=[10290], 60.00th=[11600], 00:31:53.600 | 70.00th=[13698], 80.00th=[15270], 90.00th=[39584], 95.00th=[55837], 00:31:53.600 | 99.00th=[86508], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:31:53.600 | 99.99th=[90702] 00:31:53.600 bw ( KiB/s): min=14160, max=18608, per=26.78%, avg=16384.00, stdev=3145.21, samples=2 00:31:53.600 iops : min= 3540, max= 4652, avg=4096.00, stdev=786.30, samples=2 00:31:53.600 lat (usec) : 1000=0.02% 00:31:53.600 lat (msec) : 4=1.03%, 10=39.34%, 20=44.62%, 50=10.91%, 100=4.06% 00:31:53.600 cpu : usr=3.57%, sys=3.97%, ctx=225, majf=0, minf=1 00:31:53.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:53.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.600 issued rwts: total=4050,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.600 job1: (groupid=0, jobs=1): err= 0: pid=2724957: Wed Nov 20 17:26:11 2024 00:31:53.600 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:31:53.600 slat (nsec): min=1302, max=21951k, avg=92300.87, stdev=865144.56 00:31:53.600 clat (usec): min=392, max=36632, avg=12895.74, stdev=5939.25 00:31:53.600 lat (usec): min=400, max=47149, avg=12988.04, stdev=6003.16 00:31:53.600 clat percentiles (usec): 00:31:53.600 | 1.00th=[ 2147], 5.00th=[ 3949], 10.00th=[ 5800], 20.00th=[ 8160], 00:31:53.600 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[11994], 60.00th=[13173], 00:31:53.600 | 
70.00th=[15008], 80.00th=[17433], 90.00th=[20841], 95.00th=[23987], 00:31:53.600 | 99.00th=[31065], 99.50th=[31327], 99.90th=[35914], 99.95th=[35914], 00:31:53.600 | 99.99th=[36439] 00:31:53.600 write: IOPS=5264, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1008msec); 0 zone resets 00:31:53.600 slat (usec): min=2, max=10435, avg=85.93, stdev=601.06 00:31:53.600 clat (usec): min=463, max=55579, avg=11660.23, stdev=9634.96 00:31:53.600 lat (usec): min=470, max=55588, avg=11746.16, stdev=9708.03 00:31:53.600 clat percentiles (usec): 00:31:53.600 | 1.00th=[ 750], 5.00th=[ 4555], 10.00th=[ 5473], 20.00th=[ 7701], 00:31:53.600 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[ 9634], 00:31:53.600 | 70.00th=[10290], 80.00th=[11994], 90.00th=[17695], 95.00th=[37487], 00:31:53.600 | 99.00th=[52167], 99.50th=[54789], 99.90th=[55313], 99.95th=[55837], 00:31:53.600 | 99.99th=[55837] 00:31:53.600 bw ( KiB/s): min=20480, max=21040, per=33.93%, avg=20760.00, stdev=395.98, samples=2 00:31:53.600 iops : min= 5120, max= 5260, avg=5190.00, stdev=98.99, samples=2 00:31:53.600 lat (usec) : 500=0.04%, 750=0.51%, 1000=0.80% 00:31:53.600 lat (msec) : 2=0.61%, 4=2.44%, 10=43.79%, 20=40.28%, 50=10.52% 00:31:53.600 lat (msec) : 100=1.02% 00:31:53.600 cpu : usr=3.18%, sys=6.26%, ctx=335, majf=0, minf=2 00:31:53.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:53.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.600 issued rwts: total=5120,5307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.600 job2: (groupid=0, jobs=1): err= 0: pid=2724958: Wed Nov 20 17:26:11 2024 00:31:53.600 read: IOPS=2354, BW=9418KiB/s (9644kB/s)(9832KiB/1044msec) 00:31:53.600 slat (nsec): min=1968, max=24009k, avg=216836.43, stdev=1390783.36 00:31:53.600 clat (usec): min=8244, max=64373, 
avg=29138.59, stdev=13870.01 00:31:53.600 lat (usec): min=8250, max=66844, avg=29355.43, stdev=13986.69 00:31:53.600 clat percentiles (usec): 00:31:53.600 | 1.00th=[10945], 5.00th=[12780], 10.00th=[14484], 20.00th=[15795], 00:31:53.600 | 30.00th=[17433], 40.00th=[21365], 50.00th=[23200], 60.00th=[32900], 00:31:53.600 | 70.00th=[39584], 80.00th=[44303], 90.00th=[49546], 95.00th=[52167], 00:31:53.600 | 99.00th=[57410], 99.50th=[61604], 99.90th=[64226], 99.95th=[64226], 00:31:53.600 | 99.99th=[64226] 00:31:53.600 write: IOPS=2452, BW=9808KiB/s (10.0MB/s)(10.0MiB/1044msec); 0 zone resets 00:31:53.600 slat (usec): min=2, max=18261, avg=174.72, stdev=1078.31 00:31:53.600 clat (usec): min=8236, max=57059, avg=23521.31, stdev=10505.84 00:31:53.600 lat (usec): min=8245, max=57091, avg=23696.04, stdev=10603.98 00:31:53.600 clat percentiles (usec): 00:31:53.600 | 1.00th=[ 8356], 5.00th=[11600], 10.00th=[11863], 20.00th=[13829], 00:31:53.600 | 30.00th=[15139], 40.00th=[17695], 50.00th=[20317], 60.00th=[24773], 00:31:53.600 | 70.00th=[30802], 80.00th=[34341], 90.00th=[39584], 95.00th=[42730], 00:31:53.600 | 99.00th=[43779], 99.50th=[43779], 99.90th=[53740], 99.95th=[56886], 00:31:53.600 | 99.99th=[56886] 00:31:53.600 bw ( KiB/s): min= 8192, max=12288, per=16.74%, avg=10240.00, stdev=2896.31, samples=2 00:31:53.600 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:31:53.600 lat (msec) : 10=1.28%, 20=39.62%, 50=54.50%, 100=4.60% 00:31:53.600 cpu : usr=1.53%, sys=4.51%, ctx=201, majf=0, minf=1 00:31:53.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:31:53.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.600 issued rwts: total=2458,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.600 job3: (groupid=0, jobs=1): err= 0: pid=2724959: Wed Nov 
20 17:26:11 2024 00:31:53.600 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:31:53.600 slat (nsec): min=1052, max=26102k, avg=126907.67, stdev=1019922.19 00:31:53.600 clat (usec): min=6410, max=74947, avg=17735.39, stdev=16065.96 00:31:53.600 lat (usec): min=7116, max=74952, avg=17862.30, stdev=16158.41 00:31:53.600 clat percentiles (usec): 00:31:53.600 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9634], 00:31:53.600 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11076], 60.00th=[12125], 00:31:53.600 | 70.00th=[13435], 80.00th=[14877], 90.00th=[51643], 95.00th=[61604], 00:31:53.600 | 99.00th=[69731], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:31:53.600 | 99.99th=[74974] 00:31:53.600 write: IOPS=3995, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1003msec); 0 zone resets 00:31:53.600 slat (nsec): min=1674, max=18053k, avg=132100.28, stdev=803175.53 00:31:53.600 clat (usec): min=290, max=60006, avg=15486.86, stdev=10672.92 00:31:53.600 lat (usec): min=3143, max=60013, avg=15618.96, stdev=10737.24 00:31:53.600 clat percentiles (usec): 00:31:53.600 | 1.00th=[ 5473], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[ 9503], 00:31:53.600 | 30.00th=[10290], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:31:53.600 | 70.00th=[12649], 80.00th=[17171], 90.00th=[35914], 95.00th=[41681], 00:31:53.600 | 99.00th=[51643], 99.50th=[55313], 99.90th=[60031], 99.95th=[60031], 00:31:53.600 | 99.99th=[60031] 00:31:53.600 bw ( KiB/s): min=14712, max=16320, per=25.36%, avg=15516.00, stdev=1137.03, samples=2 00:31:53.600 iops : min= 3678, max= 4080, avg=3879.00, stdev=284.26, samples=2 00:31:53.600 lat (usec) : 500=0.01% 00:31:53.600 lat (msec) : 4=0.43%, 10=28.09%, 20=55.05%, 50=10.84%, 100=5.57% 00:31:53.600 cpu : usr=2.20%, sys=2.99%, ctx=437, majf=0, minf=1 00:31:53.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:53.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.600 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.600 issued rwts: total=3584,4007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.600 00:31:53.600 Run status group 0 (all jobs): 00:31:53.600 READ: bw=56.9MiB/s (59.7MB/s), 9418KiB/s-19.8MiB/s (9644kB/s-20.8MB/s), io=59.4MiB (62.3MB), run=1003-1044msec 00:31:53.600 WRITE: bw=59.8MiB/s (62.7MB/s), 9808KiB/s-20.6MiB/s (10.0MB/s-21.6MB/s), io=62.4MiB (65.4MB), run=1003-1044msec 00:31:53.600 00:31:53.600 Disk stats (read/write): 00:31:53.600 nvme0n1: ios=3109/3152, merge=0/0, ticks=38489/45764, in_queue=84253, util=92.79% 00:31:53.600 nvme0n2: ios=4269/4608, merge=0/0, ticks=49836/46658, in_queue=96494, util=88.22% 00:31:53.600 nvme0n3: ios=2021/2048, merge=0/0, ticks=20935/14965, in_queue=35900, util=95.11% 00:31:53.600 nvme0n4: ios=3485/3584, merge=0/0, ticks=12865/15184, in_queue=28049, util=90.47% 00:31:53.600 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:53.600 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2725174 00:31:53.600 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:53.600 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:53.600 [global] 00:31:53.600 thread=1 00:31:53.600 invalidate=1 00:31:53.600 rw=read 00:31:53.600 time_based=1 00:31:53.600 runtime=10 00:31:53.600 ioengine=libaio 00:31:53.600 direct=1 00:31:53.600 bs=4096 00:31:53.600 iodepth=1 00:31:53.600 norandommap=1 00:31:53.600 numjobs=1 00:31:53.600 00:31:53.600 [job0] 00:31:53.600 filename=/dev/nvme0n1 00:31:53.600 [job1] 00:31:53.600 filename=/dev/nvme0n2 00:31:53.600 [job2] 00:31:53.600 filename=/dev/nvme0n3 00:31:53.600 [job3] 00:31:53.600 
filename=/dev/nvme0n4 00:31:53.600 Could not set queue depth (nvme0n1) 00:31:53.600 Could not set queue depth (nvme0n2) 00:31:53.600 Could not set queue depth (nvme0n3) 00:31:53.600 Could not set queue depth (nvme0n4) 00:31:53.857 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:53.857 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:53.857 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:53.857 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:53.857 fio-3.35 00:31:53.857 Starting 4 threads 00:31:57.131 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:57.131 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:57.131 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11489280, buflen=4096 00:31:57.131 fio: pid=2725331, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:57.131 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:31:57.131 fio: pid=2725330, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:57.131 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:57.131 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:57.131 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:57.131 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:57.131 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2686976, buflen=4096 00:31:57.131 fio: pid=2725326, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:57.388 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=65511424, buflen=4096 00:31:57.388 fio: pid=2725329, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:57.388 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:57.388 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:57.388 00:31:57.388 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2725326: Wed Nov 20 17:26:15 2024 00:31:57.388 read: IOPS=207, BW=828KiB/s (848kB/s)(2624KiB/3168msec) 00:31:57.388 slat (usec): min=6, max=8794, avg=24.56, stdev=349.69 00:31:57.388 clat (usec): min=208, max=42009, avg=4769.95, stdev=12826.15 00:31:57.388 lat (usec): min=215, max=50068, avg=4791.82, stdev=12872.64 00:31:57.388 clat percentiles (usec): 00:31:57.388 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:31:57.388 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 235], 60.00th=[ 239], 00:31:57.388 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[41157], 95.00th=[41157], 00:31:57.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:57.388 | 99.99th=[42206] 00:31:57.388 bw ( KiB/s): min= 96, max= 4720, per=3.74%, avg=869.83, 
stdev=1886.20, samples=6 00:31:57.388 iops : min= 24, max= 1180, avg=217.33, stdev=471.61, samples=6 00:31:57.388 lat (usec) : 250=82.04%, 500=6.70% 00:31:57.388 lat (msec) : 50=11.11% 00:31:57.388 cpu : usr=0.09%, sys=0.19%, ctx=662, majf=0, minf=1 00:31:57.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.388 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.388 issued rwts: total=657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:57.388 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2725329: Wed Nov 20 17:26:15 2024 00:31:57.388 read: IOPS=4751, BW=18.6MiB/s (19.5MB/s)(62.5MiB/3366msec) 00:31:57.388 slat (usec): min=6, max=15177, avg=10.52, stdev=203.88 00:31:57.388 clat (usec): min=163, max=494, avg=197.38, stdev=23.73 00:31:57.388 lat (usec): min=171, max=15542, avg=207.89, stdev=207.88 00:31:57.388 clat percentiles (usec): 00:31:57.388 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 182], 00:31:57.388 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:31:57.388 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 245], 95.00th=[ 251], 00:31:57.388 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 318], 99.95th=[ 375], 00:31:57.388 | 99.99th=[ 482] 00:31:57.388 bw ( KiB/s): min=17397, max=20520, per=82.96%, avg=19252.83, stdev=1061.24, samples=6 00:31:57.388 iops : min= 4349, max= 5130, avg=4813.17, stdev=265.40, samples=6 00:31:57.388 lat (usec) : 250=94.54%, 500=5.45% 00:31:57.388 cpu : usr=1.49%, sys=3.98%, ctx=16000, majf=0, minf=2 00:31:57.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.388 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:57.388 issued rwts: total=15995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:57.388 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2725330: Wed Nov 20 17:26:15 2024 00:31:57.388 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2935msec) 00:31:57.388 slat (usec): min=13, max=1812, avg=50.18, stdev=209.19 00:31:57.388 clat (usec): min=405, max=41182, avg=40405.91, stdev=4780.78 00:31:57.388 lat (usec): min=447, max=42995, avg=40456.42, stdev=4787.48 00:31:57.388 clat percentiles (usec): 00:31:57.388 | 1.00th=[ 404], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:57.388 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:57.388 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:57.389 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:57.389 | 99.99th=[41157] 00:31:57.389 bw ( KiB/s): min= 96, max= 104, per=0.43%, avg=99.20, stdev= 4.38, samples=5 00:31:57.389 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:31:57.389 lat (usec) : 500=1.37% 00:31:57.389 lat (msec) : 50=97.26% 00:31:57.389 cpu : usr=0.00%, sys=0.14%, ctx=75, majf=0, minf=2 00:31:57.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.389 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.389 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:57.389 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2725331: Wed Nov 20 17:26:15 2024 00:31:57.389 read: IOPS=1020, BW=4080KiB/s (4178kB/s)(11.0MiB/2750msec) 00:31:57.389 slat (nsec): min=6203, max=35207, avg=7546.33, stdev=2414.09 
00:31:57.389 clat (usec): min=213, max=41974, avg=964.01, stdev=5332.94 00:31:57.389 lat (usec): min=220, max=41997, avg=971.55, stdev=5334.13 00:31:57.389 clat percentiles (usec): 00:31:57.389 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:31:57.389 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:31:57.389 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 306], 00:31:57.389 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:31:57.389 | 99.99th=[42206] 00:31:57.389 bw ( KiB/s): min= 296, max= 8968, per=16.59%, avg=3849.60, stdev=3715.95, samples=5 00:31:57.389 iops : min= 74, max= 2242, avg=962.40, stdev=928.99, samples=5 00:31:57.389 lat (usec) : 250=55.20%, 500=42.94%, 750=0.07% 00:31:57.389 lat (msec) : 50=1.75% 00:31:57.389 cpu : usr=0.18%, sys=1.02%, ctx=2806, majf=0, minf=2 00:31:57.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.389 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.389 issued rwts: total=2806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:57.389 00:31:57.389 Run status group 0 (all jobs): 00:31:57.389 READ: bw=22.7MiB/s (23.8MB/s), 98.1KiB/s-18.6MiB/s (100kB/s-19.5MB/s), io=76.3MiB (80.0MB), run=2750-3366msec 00:31:57.389 00:31:57.389 Disk stats (read/write): 00:31:57.389 nvme0n1: ios=694/0, merge=0/0, ticks=4083/0, in_queue=4083, util=99.01% 00:31:57.389 nvme0n2: ios=16034/0, merge=0/0, ticks=4076/0, in_queue=4076, util=98.05% 00:31:57.389 nvme0n3: ios=115/0, merge=0/0, ticks=3791/0, in_queue=3791, util=99.16% 00:31:57.389 nvme0n4: ios=2800/0, merge=0/0, ticks=2562/0, in_queue=2562, util=96.48% 00:31:57.646 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:31:57.646 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:57.903 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:57.903 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:57.903 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:57.903 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:58.159 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:58.159 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:58.417 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:58.417 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2725174 00:31:58.417 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:58.417 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:58.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:58.417 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:58.417 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:58.417 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:58.417 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:58.674 nvmf hotplug test: fio failed as expected 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 
00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.674 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.674 rmmod nvme_tcp 00:31:58.674 rmmod nvme_fabrics 00:31:58.674 rmmod nvme_keyring 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2722503 ']' 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2722503 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2722503 ']' 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2722503 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:58.933 17:26:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2722503 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2722503' 00:31:58.933 killing process with pid 2722503 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2722503 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2722503 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.933 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.516 00:32:01.516 real 0m25.937s 00:32:01.516 user 1m30.769s 00:32:01.516 sys 0m11.346s 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:01.516 ************************************ 00:32:01.516 END TEST nvmf_fio_target 00:32:01.516 ************************************ 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:01.516 ************************************ 00:32:01.516 START TEST nvmf_bdevio 00:32:01.516 ************************************ 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:01.516 * Looking for test storage... 00:32:01.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 
00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.516 --rc genhtml_branch_coverage=1 00:32:01.516 --rc genhtml_function_coverage=1 00:32:01.516 --rc genhtml_legend=1 00:32:01.516 --rc geninfo_all_blocks=1 00:32:01.516 --rc geninfo_unexecuted_blocks=1 00:32:01.516 00:32:01.516 ' 00:32:01.516 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:01.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.516 --rc genhtml_branch_coverage=1 00:32:01.516 --rc genhtml_function_coverage=1 00:32:01.516 --rc genhtml_legend=1 00:32:01.516 --rc geninfo_all_blocks=1 00:32:01.516 --rc geninfo_unexecuted_blocks=1 00:32:01.516 00:32:01.516 ' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:01.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.517 --rc genhtml_branch_coverage=1 00:32:01.517 --rc genhtml_function_coverage=1 00:32:01.517 --rc genhtml_legend=1 00:32:01.517 --rc geninfo_all_blocks=1 00:32:01.517 --rc geninfo_unexecuted_blocks=1 00:32:01.517 00:32:01.517 ' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:01.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.517 --rc genhtml_branch_coverage=1 00:32:01.517 --rc genhtml_function_coverage=1 00:32:01.517 --rc genhtml_legend=1 
00:32:01.517 --rc geninfo_all_blocks=1 00:32:01.517 --rc geninfo_unexecuted_blocks=1 00:32:01.517 00:32:01.517 ' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:01.517 17:26:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.517 17:26:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.517 17:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.877 17:26:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:06.877 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:06.877 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.877 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.878 17:26:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:06.878 Found net devices under 0000:86:00.0: cvl_0_0 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:06.878 Found net devices under 0000:86:00.1: cvl_0_1 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.878 17:26:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.878 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.138 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.138 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.138 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:32:07.138 17:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:32:07.138 00:32:07.138 --- 10.0.0.2 ping statistics --- 00:32:07.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.138 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:32:07.138 00:32:07.138 --- 10.0.0.1 ping statistics --- 00:32:07.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.138 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2729572 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2729572 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2729572 ']' 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.138 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.397 [2024-11-20 17:26:25.186232] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:07.397 [2024-11-20 17:26:25.187197] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:32:07.397 [2024-11-20 17:26:25.187244] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.397 [2024-11-20 17:26:25.264727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.397 [2024-11-20 17:26:25.306531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.397 [2024-11-20 17:26:25.306566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.397 [2024-11-20 17:26:25.306573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.397 [2024-11-20 17:26:25.306579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.397 [2024-11-20 17:26:25.306584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.397 [2024-11-20 17:26:25.308218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:07.397 [2024-11-20 17:26:25.308309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:07.397 [2024-11-20 17:26:25.308417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.397 [2024-11-20 17:26:25.308417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:07.397 [2024-11-20 17:26:25.376178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:07.397 [2024-11-20 17:26:25.377220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:07.397 [2024-11-20 17:26:25.377348] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:07.397 [2024-11-20 17:26:25.377665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:07.397 [2024-11-20 17:26:25.377708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:07.397 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.397 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:07.397 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.397 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.397 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.657 [2024-11-20 17:26:25.445172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.657 Malloc0 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:07.657 [2024-11-20 17:26:25.529508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:07.657 { 00:32:07.657 "params": { 00:32:07.657 "name": "Nvme$subsystem", 00:32:07.657 "trtype": "$TEST_TRANSPORT", 00:32:07.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.657 "adrfam": "ipv4", 00:32:07.657 "trsvcid": "$NVMF_PORT", 00:32:07.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.657 "hdgst": ${hdgst:-false}, 00:32:07.657 "ddgst": ${ddgst:-false} 00:32:07.657 }, 00:32:07.657 "method": "bdev_nvme_attach_controller" 00:32:07.657 } 00:32:07.657 EOF 00:32:07.657 )") 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:07.657 17:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:07.657 "params": { 00:32:07.657 "name": "Nvme1", 00:32:07.657 "trtype": "tcp", 00:32:07.657 "traddr": "10.0.0.2", 00:32:07.657 "adrfam": "ipv4", 00:32:07.657 "trsvcid": "4420", 00:32:07.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:07.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:07.657 "hdgst": false, 00:32:07.657 "ddgst": false 00:32:07.657 }, 00:32:07.657 "method": "bdev_nvme_attach_controller" 00:32:07.657 }' 00:32:07.657 [2024-11-20 17:26:25.579769] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:32:07.657 [2024-11-20 17:26:25.579816] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729606 ] 00:32:07.657 [2024-11-20 17:26:25.657951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:07.916 [2024-11-20 17:26:25.701872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.916 [2024-11-20 17:26:25.701980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.916 [2024-11-20 17:26:25.701980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.916 I/O targets: 00:32:07.916 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:07.916 00:32:07.916 00:32:07.916 CUnit - A unit testing framework for C - Version 2.1-3 00:32:07.916 http://cunit.sourceforge.net/ 00:32:07.916 00:32:07.916 00:32:07.916 Suite: bdevio tests on: Nvme1n1 00:32:07.916 Test: blockdev write read block ...passed 00:32:07.916 Test: blockdev write zeroes read block ...passed 00:32:07.916 Test: blockdev write zeroes read no split ...passed 00:32:07.916 Test: blockdev 
write zeroes read split ...passed 00:32:08.174 Test: blockdev write zeroes read split partial ...passed 00:32:08.174 Test: blockdev reset ...[2024-11-20 17:26:25.962926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:08.174 [2024-11-20 17:26:25.962985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36340 (9): Bad file descriptor 00:32:08.174 [2024-11-20 17:26:25.966107] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:08.174 passed 00:32:08.174 Test: blockdev write read 8 blocks ...passed 00:32:08.174 Test: blockdev write read size > 128k ...passed 00:32:08.174 Test: blockdev write read invalid size ...passed 00:32:08.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:08.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:08.174 Test: blockdev write read max offset ...passed 00:32:08.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:08.174 Test: blockdev writev readv 8 blocks ...passed 00:32:08.174 Test: blockdev writev readv 30 x 1block ...passed 00:32:08.174 Test: blockdev writev readv block ...passed 00:32:08.174 Test: blockdev writev readv size > 128k ...passed 00:32:08.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:08.174 Test: blockdev comparev and writev ...[2024-11-20 17:26:26.135107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:08.174 [2024-11-20 17:26:26.135135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.174 [2024-11-20 17:26:26.135149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:08.174 
[2024-11-20 17:26:26.135157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:08.174 [2024-11-20 17:26:26.135441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:08.174 [2024-11-20 17:26:26.135451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:08.174 [2024-11-20 17:26:26.135463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:08.174 [2024-11-20 17:26:26.135470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:08.174 [2024-11-20 17:26:26.135753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:08.174 [2024-11-20 17:26:26.135763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:08.174 [2024-11-20 17:26:26.135774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:08.174 [2024-11-20 17:26:26.135781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:08.174 [2024-11-20 17:26:26.136055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:08.174 [2024-11-20 17:26:26.136065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:08.174 [2024-11-20 17:26:26.136076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:08.174 [2024-11-20 17:26:26.136087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:08.174 passed 00:32:08.433 Test: blockdev nvme passthru rw ...passed 00:32:08.433 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:26:26.218614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:08.433 [2024-11-20 17:26:26.218636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:08.433 [2024-11-20 17:26:26.218747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:08.433 [2024-11-20 17:26:26.218756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:08.433 [2024-11-20 17:26:26.218869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:08.433 [2024-11-20 17:26:26.218879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:08.433 [2024-11-20 17:26:26.218985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:08.433 [2024-11-20 17:26:26.218994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:08.433 passed 00:32:08.433 Test: blockdev nvme admin passthru ...passed 00:32:08.433 Test: blockdev copy ...passed 00:32:08.433 00:32:08.433 Run Summary: Type Total Ran Passed Failed Inactive 00:32:08.433 suites 1 1 n/a 0 0 00:32:08.433 tests 23 23 23 0 0 00:32:08.433 asserts 152 152 152 0 n/a 00:32:08.433 00:32:08.433 Elapsed time = 0.849 
seconds 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.433 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.433 rmmod nvme_tcp 00:32:08.433 rmmod nvme_fabrics 00:32:08.433 rmmod nvme_keyring 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2729572 ']' 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2729572 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2729572 ']' 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2729572 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729572 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2729572' 00:32:08.693 killing process with pid 2729572 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2729572 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2729572 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.693 17:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.231 17:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:11.231 00:32:11.231 real 0m9.704s 00:32:11.231 user 0m7.651s 00:32:11.231 sys 0m5.090s 00:32:11.231 17:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.231 17:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:11.231 ************************************ 00:32:11.231 END TEST nvmf_bdevio 00:32:11.231 ************************************ 00:32:11.231 17:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:11.231 00:32:11.231 real 4m30.876s 00:32:11.231 user 9m2.487s 00:32:11.231 sys 1m51.030s 00:32:11.231 17:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:11.231 17:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:11.231 ************************************ 00:32:11.231 END TEST nvmf_target_core_interrupt_mode 00:32:11.231 ************************************ 00:32:11.231 17:26:28 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:11.231 17:26:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:11.231 17:26:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.231 17:26:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.231 ************************************ 00:32:11.231 START TEST nvmf_interrupt 00:32:11.231 ************************************ 00:32:11.231 17:26:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:11.231 * Looking for test storage... 
00:32:11.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:11.231 17:26:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:11.231 17:26:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:11.231 17:26:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:11.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.231 --rc genhtml_branch_coverage=1 00:32:11.231 --rc genhtml_function_coverage=1 00:32:11.231 --rc genhtml_legend=1 00:32:11.231 --rc geninfo_all_blocks=1 00:32:11.231 --rc geninfo_unexecuted_blocks=1 00:32:11.231 00:32:11.231 ' 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:11.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.231 --rc genhtml_branch_coverage=1 00:32:11.231 --rc 
genhtml_function_coverage=1 00:32:11.231 --rc genhtml_legend=1 00:32:11.231 --rc geninfo_all_blocks=1 00:32:11.231 --rc geninfo_unexecuted_blocks=1 00:32:11.231 00:32:11.231 ' 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:11.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.231 --rc genhtml_branch_coverage=1 00:32:11.231 --rc genhtml_function_coverage=1 00:32:11.231 --rc genhtml_legend=1 00:32:11.231 --rc geninfo_all_blocks=1 00:32:11.231 --rc geninfo_unexecuted_blocks=1 00:32:11.231 00:32:11.231 ' 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:11.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.231 --rc genhtml_branch_coverage=1 00:32:11.231 --rc genhtml_function_coverage=1 00:32:11.231 --rc genhtml_legend=1 00:32:11.231 --rc geninfo_all_blocks=1 00:32:11.231 --rc geninfo_unexecuted_blocks=1 00:32:11.231 00:32:11.231 ' 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.231 
17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.231 17:26:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.232 
17:26:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.232 17:26:29 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:11.232 
17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:11.232 17:26:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.804 17:26:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:17.804 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:17.805 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:17.805 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.805 17:26:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:17.805 Found net devices under 0000:86:00.0: cvl_0_0 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:17.805 Found net devices under 0000:86:00.1: cvl_0_1 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.805 17:26:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:17.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:32:17.805 00:32:17.805 --- 10.0.0.2 ping statistics --- 00:32:17.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.805 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:32:17.805 00:32:17.805 --- 10.0.0.1 ping statistics --- 00:32:17.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.805 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:32:17.805 17:26:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:17.805 17:26:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2733368 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2733368 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2733368 ']' 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.805 [2024-11-20 17:26:35.093461] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:17.805 [2024-11-20 17:26:35.094338] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:32:17.805 [2024-11-20 17:26:35.094369] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.805 [2024-11-20 17:26:35.173618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:17.805 [2024-11-20 17:26:35.214503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.805 [2024-11-20 17:26:35.214540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.805 [2024-11-20 17:26:35.214547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.805 [2024-11-20 17:26:35.214553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.805 [2024-11-20 17:26:35.214558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.805 [2024-11-20 17:26:35.215700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.805 [2024-11-20 17:26:35.215702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.805 [2024-11-20 17:26:35.282535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:17.805 [2024-11-20 17:26:35.283118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:17.805 [2024-11-20 17:26:35.283339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:17.805 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:17.806 5000+0 records in 00:32:17.806 5000+0 records out 00:32:17.806 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0168242 s, 609 MB/s 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 AIO0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.806 17:26:35 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 [2024-11-20 17:26:35.408597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:17.806 [2024-11-20 17:26:35.448813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2733368 0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2733368 0 idle 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2733368 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2733368 -w 256 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2733368 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0' 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2733368 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:17.806 
17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2733368 1 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2733368 1 idle 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2733368 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2733368 -w 256 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2733373 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2733373 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2733412 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2733368 0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2733368 0 busy 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2733368 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2733368 -w 256 00:32:17.806 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:18.064 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2733368 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.43 reactor_0' 00:32:18.064 17:26:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2733368 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.43 reactor_0 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:18.065 17:26:36 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2733368 1 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2733368 1 busy 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2733368 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2733368 -w 256 00:32:18.065 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2733373 root 20 0 128.2g 46848 33792 R 93.3 0.0 0:00.27 reactor_1' 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2733373 root 20 0 128.2g 46848 33792 R 93.3 0.0 0:00.27 reactor_1 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.322 17:26:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2733412 00:32:28.285 Initializing NVMe Controllers 00:32:28.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.285 Controller IO queue size 256, less than required. 00:32:28.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:28.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:28.285 Initialization complete. Launching workers. 
00:32:28.285 ======================================================== 00:32:28.285 Latency(us) 00:32:28.285 Device Information : IOPS MiB/s Average min max 00:32:28.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16429.50 64.18 15589.68 2915.36 30230.22 00:32:28.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16596.70 64.83 15429.70 7127.78 26819.47 00:32:28.285 ======================================================== 00:32:28.285 Total : 33026.20 129.01 15509.28 2915.36 30230.22 00:32:28.285 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2733368 0 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2733368 0 idle 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2733368 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2733368 -w 256 00:32:28.285 17:26:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2733368 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.25 reactor_0' 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2733368 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.25 reactor_0 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2733368 1 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2733368 1 idle 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2733368 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:28.285 17:26:46 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2733368 -w 256 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2733373 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2733373 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:28.285 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:28.544 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:28.544 17:26:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:28.544 17:26:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:28.804 17:26:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
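The `waitforserial` step above polls `lsblk -l -o NAME,SERIAL` (up to 16 tries, `sleep 2` apart) until a block device carrying the subsystem serial `SPDKISFASTANDAWESOME` shows up after `nvme connect`. A minimal offline sketch of that loop, assuming a hypothetical `count_devices_with_serial` helper fed captured `lsblk` output (the real script in `autotest_common.sh` pipes live output):

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling loop seen in this log.
# count_devices_with_serial is a hypothetical helper for offline testing;
# the real script runs `lsblk -l -o NAME,SERIAL | grep -c "$serial"` live.

count_devices_with_serial() {
    # $1: captured `lsblk -l -o NAME,SERIAL` output, $2: serial to match
    echo "$1" | grep -c "$2"
}

waitforserial_sketch() {
    local serial=$1 lsblk_output=$2
    local i=0 nvme_devices=0 nvme_device_counter=1
    while (( i++ <= 15 )); do                  # retry bound from the log
        nvme_devices=$(count_devices_with_serial "$lsblk_output" "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2                                # the log shows `sleep 2` between polls
    done
    return 1
}
```

With the device present in the first poll the function returns immediately, matching the single `sleep 2` / `return 0` sequence in the trace.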
00:32:28.804 17:26:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:28.804 17:26:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:28.804 17:26:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:28.804 17:26:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:31.336 17:26:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2733368 0 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2733368 0 idle 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2733368 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2733368 -w 256 00:32:31.337 17:26:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2733368 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.51 reactor_0' 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2733368 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.51 reactor_0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2733368 1 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2733368 1 idle 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2733368 00:32:31.337 
17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2733368 -w 256 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2733373 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2733373 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:31.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:31.337 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.596 rmmod nvme_tcp 00:32:31.596 rmmod nvme_fabrics 00:32:31.596 rmmod nvme_keyring 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.596 17:26:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2733368 ']' 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2733368 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2733368 ']' 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2733368 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733368 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2733368' 00:32:31.596 killing process with pid 2733368 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2733368 00:32:31.596 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2733368 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:31.853 17:26:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.756 17:26:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.756 00:32:33.756 real 0m22.862s 00:32:33.757 user 0m39.440s 00:32:33.757 sys 0m8.582s 00:32:33.757 17:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.757 17:26:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:33.757 ************************************ 00:32:33.757 END TEST nvmf_interrupt 00:32:33.757 ************************************ 00:32:33.757 00:32:33.757 real 27m24.204s 00:32:33.757 user 56m28.700s 00:32:33.757 sys 9m19.850s 00:32:33.757 17:26:51 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.757 17:26:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.757 ************************************ 00:32:33.757 END TEST nvmf_tcp 00:32:33.757 ************************************ 00:32:34.014 17:26:51 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:34.014 17:26:51 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:34.014 17:26:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:34.014 17:26:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.014 17:26:51 -- common/autotest_common.sh@10 -- # set +x 00:32:34.014 ************************************ 
00:32:34.014 START TEST spdkcli_nvmf_tcp 00:32:34.014 ************************************ 00:32:34.014 17:26:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:34.014 * Looking for test storage... 00:32:34.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:34.014 17:26:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:34.014 17:26:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:34.014 17:26:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:34.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.014 --rc genhtml_branch_coverage=1 00:32:34.014 --rc genhtml_function_coverage=1 00:32:34.014 --rc genhtml_legend=1 00:32:34.014 --rc geninfo_all_blocks=1 00:32:34.014 --rc geninfo_unexecuted_blocks=1 00:32:34.014 00:32:34.014 ' 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:34.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.014 --rc genhtml_branch_coverage=1 00:32:34.014 --rc genhtml_function_coverage=1 00:32:34.014 --rc genhtml_legend=1 00:32:34.014 --rc geninfo_all_blocks=1 
00:32:34.014 --rc geninfo_unexecuted_blocks=1 00:32:34.014 00:32:34.014 ' 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:34.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.014 --rc genhtml_branch_coverage=1 00:32:34.014 --rc genhtml_function_coverage=1 00:32:34.014 --rc genhtml_legend=1 00:32:34.014 --rc geninfo_all_blocks=1 00:32:34.014 --rc geninfo_unexecuted_blocks=1 00:32:34.014 00:32:34.014 ' 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:34.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.014 --rc genhtml_branch_coverage=1 00:32:34.014 --rc genhtml_function_coverage=1 00:32:34.014 --rc genhtml_legend=1 00:32:34.014 --rc geninfo_all_blocks=1 00:32:34.014 --rc geninfo_unexecuted_blocks=1 00:32:34.014 00:32:34.014 ' 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.014 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:34.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2736169 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2736169 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2736169 ']' 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:34.273 
17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.273 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.273 [2024-11-20 17:26:52.127577] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:32:34.273 [2024-11-20 17:26:52.127624] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736169 ] 00:32:34.273 [2024-11-20 17:26:52.199939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:34.273 [2024-11-20 17:26:52.244671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.273 [2024-11-20 17:26:52.244673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.531 17:26:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:34.531 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:34.531 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:34.531 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:34.531 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:34.531 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:34.531 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:34.531 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:34.531 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:34.531 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:34.531 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:34.531 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:34.531 ' 00:32:37.054 [2024-11-20 17:26:55.067796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.421 [2024-11-20 17:26:56.408284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:40.935 [2024-11-20 17:26:58.899893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:32:43.454 [2024-11-20 17:27:01.058798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:44.826 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:44.826 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:44.826 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:44.826 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:44.826 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:44.826 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:44.826 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:44.826 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:44.826 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:44.826 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:44.826 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:44.826 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:44.826 17:27:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:44.826 17:27:02 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.826 17:27:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:44.826 17:27:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:44.826 17:27:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:44.826 17:27:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:44.826 17:27:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:44.826 17:27:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.391 17:27:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:45.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:45.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:45.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:45.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:45.391 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:45.391 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:45.391 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:45.391 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:45.391 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:45.391 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:45.391 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:45.391 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:45.391 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:45.391 ' 00:32:51.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:51.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:51.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:51.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:51.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:51.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:51.947 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:51.947 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:51.947 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:51.947 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:51.947 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:51.947 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:51.947 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:51.947 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:51.947 17:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:51.947 17:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.947 17:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2736169 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2736169 ']' 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2736169 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2736169 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2736169' 00:32:51.947 killing process with pid 2736169 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2736169 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2736169 00:32:51.947 17:27:09 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2736169 ']' 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2736169 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2736169 ']' 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2736169 00:32:51.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2736169) - No such process 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2736169 is not found' 00:32:51.947 Process with pid 2736169 is not found 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:51.947 00:32:51.947 real 0m17.355s 00:32:51.947 user 0m38.247s 00:32:51.947 sys 0m0.767s 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.947 17:27:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:51.947 ************************************ 00:32:51.947 END TEST spdkcli_nvmf_tcp 00:32:51.947 ************************************ 00:32:51.947 17:27:09 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:51.947 17:27:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:51.947 17:27:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:32:51.947 17:27:09 -- common/autotest_common.sh@10 -- # set +x 00:32:51.947 ************************************ 00:32:51.947 START TEST nvmf_identify_passthru 00:32:51.947 ************************************ 00:32:51.947 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:51.947 * Looking for test storage... 00:32:51.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:51.947 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:51.947 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:51.947 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:51.947 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:51.947 17:27:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:51.948 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.948 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:51.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.948 --rc genhtml_branch_coverage=1 00:32:51.948 --rc genhtml_function_coverage=1 00:32:51.948 --rc genhtml_legend=1 00:32:51.948 --rc geninfo_all_blocks=1 00:32:51.948 --rc geninfo_unexecuted_blocks=1 00:32:51.948 
00:32:51.948 ' 00:32:51.948 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:51.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.948 --rc genhtml_branch_coverage=1 00:32:51.948 --rc genhtml_function_coverage=1 00:32:51.948 --rc genhtml_legend=1 00:32:51.948 --rc geninfo_all_blocks=1 00:32:51.948 --rc geninfo_unexecuted_blocks=1 00:32:51.948 00:32:51.948 ' 00:32:51.948 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:51.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.948 --rc genhtml_branch_coverage=1 00:32:51.948 --rc genhtml_function_coverage=1 00:32:51.948 --rc genhtml_legend=1 00:32:51.948 --rc geninfo_all_blocks=1 00:32:51.948 --rc geninfo_unexecuted_blocks=1 00:32:51.948 00:32:51.948 ' 00:32:51.948 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:51.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.948 --rc genhtml_branch_coverage=1 00:32:51.948 --rc genhtml_function_coverage=1 00:32:51.948 --rc genhtml_legend=1 00:32:51.948 --rc geninfo_all_blocks=1 00:32:51.948 --rc geninfo_unexecuted_blocks=1 00:32:51.948 00:32:51.948 ' 00:32:51.948 17:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.948 17:27:09 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:51.948 17:27:09 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:51.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.948 17:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.948 17:27:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:51.948 17:27:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.948 17:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:51.948 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:51.949 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.949 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:51.949 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:51.949 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:51.949 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.949 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:51.949 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.949 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:51.949 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:51.949 17:27:09 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.949 17:27:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.315 
17:27:15 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:57.315 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:57.315 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.315 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:57.316 Found net devices under 0000:86:00.0: cvl_0_0 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.316 17:27:15 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:57.316 Found net devices under 0000:86:00.1: cvl_0_1 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.316 
17:27:15 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:57.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:32:57.316 00:32:57.316 --- 10.0.0.2 ping statistics --- 00:32:57.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.316 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:32:57.316 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:57.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:32:57.576 00:32:57.576 --- 10.0.0.1 ping statistics --- 00:32:57.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.576 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:57.576 17:27:15 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:57.576 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:57.576 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.576 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.576 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:57.576 
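The nvmf_tcp_init trace above builds a loopback NVMe/TCP topology: the target-side interface (cvl_0_0) is moved into a private network namespace with 10.0.0.2, the initiator side (cvl_0_1) keeps 10.0.0.1 in the root namespace, and an iptables rule opens port 4420 before both directions are verified with ping. A dry-run sketch of that sequence follows; the `run` helper is an assumption of this sketch (it only echoes each command so the plan can be inspected without root), while the interface names, addresses, and port mirror the log:

```shell
# Dry-run sketch of the namespace setup traced above (nvmf/common.sh
# nvmf_tcp_init). "run" is a stand-in that prints instead of executing,
# so no root privileges or real NICs are needed to follow the plan.
run() { echo "+ $*"; }

TGT_IF=cvl_0_0          # target-side interface, moves into the namespace
INI_IF=cvl_0_1          # initiator-side interface, stays in the root ns
NS=cvl_0_0_ns_spdk      # namespace name used by the harness

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

After this, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what the NVMF_TARGET_NS_CMD array holds.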
17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:57.576 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:57.576 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:57.576 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:57.577 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:57.577 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:57.577 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:57.577 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:57.577 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:57.577 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:57.577 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:57.577 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:57.577 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:57.577 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:57.577 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:57.577 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:57.577 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:02.843 17:27:20 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:33:02.843 17:27:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:02.843 17:27:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:02.843 17:27:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:07.027 17:27:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:07.027 17:27:24 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.027 17:27:24 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.027 17:27:24 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2744101 00:33:07.027 17:27:24 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:07.027 17:27:24 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:07.027 17:27:24 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2744101 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2744101 ']' 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.027 17:27:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.027 [2024-11-20 17:27:24.881052] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:33:07.027 [2024-11-20 17:27:24.881097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.027 [2024-11-20 17:27:24.960606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:07.027 [2024-11-20 17:27:25.003731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.027 [2024-11-20 17:27:25.003768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.027 [2024-11-20 17:27:25.003774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.027 [2024-11-20 17:27:25.003781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.027 [2024-11-20 17:27:25.003786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
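The get_first_nvme_bdf trace earlier runs scripts/gen_nvme.sh and filters `.config[].params.traddr` through jq to pick the first NVMe PCI address (0000:5e:00.0 here). The sketch below shows the same extraction flow; `gen_nvme_stub` and the sed filter are assumptions standing in for gen_nvme.sh and jq so the example runs without an NVMe device:

```shell
# Sketch of get_first_nvme_bdf: pull the first traddr out of
# gen_nvme.sh-style JSON config output. gen_nvme_stub is a canned
# stand-in for the real script, and sed replaces jq here so the
# example has no external dependency beyond coreutils.
gen_nvme_stub() {
  cat <<'JSON'
{"config": [{"params": {"traddr": "0000:5e:00.0"}}]}
JSON
}

first_bdf=$(gen_nvme_stub | sed -n 's/.*"traddr": "\([^"]*\)".*/\1/p' | head -n1)
echo "$first_bdf"
```

The harness then refuses to continue if this BDF is empty (`'[' -z ... ']'` in identify_passthru.sh@17), since the whole passthru test needs a real controller to identify.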
00:33:07.027 [2024-11-20 17:27:25.005286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.027 [2024-11-20 17:27:25.005392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.027 [2024-11-20 17:27:25.005419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.027 [2024-11-20 17:27:25.005420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:07.027 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.027 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:07.027 17:27:25 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:07.027 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.027 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.027 INFO: Log level set to 20 00:33:07.027 INFO: Requests: 00:33:07.027 { 00:33:07.027 "jsonrpc": "2.0", 00:33:07.027 "method": "nvmf_set_config", 00:33:07.027 "id": 1, 00:33:07.027 "params": { 00:33:07.027 "admin_cmd_passthru": { 00:33:07.027 "identify_ctrlr": true 00:33:07.027 } 00:33:07.027 } 00:33:07.027 } 00:33:07.027 00:33:07.027 INFO: response: 00:33:07.027 { 00:33:07.027 "jsonrpc": "2.0", 00:33:07.027 "id": 1, 00:33:07.027 "result": true 00:33:07.027 } 00:33:07.027 00:33:07.027 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.027 17:27:25 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:07.027 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.027 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.027 INFO: Setting log level to 20 00:33:07.027 INFO: Setting log level to 20 00:33:07.027 INFO: Log level set to 20 00:33:07.027 INFO: Log level set to 20 00:33:07.027 
INFO: Requests: 00:33:07.027 { 00:33:07.027 "jsonrpc": "2.0", 00:33:07.027 "method": "framework_start_init", 00:33:07.027 "id": 1 00:33:07.027 } 00:33:07.027 00:33:07.027 INFO: Requests: 00:33:07.027 { 00:33:07.027 "jsonrpc": "2.0", 00:33:07.027 "method": "framework_start_init", 00:33:07.027 "id": 1 00:33:07.027 } 00:33:07.027 00:33:07.285 [2024-11-20 17:27:25.110757] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:07.286 INFO: response: 00:33:07.286 { 00:33:07.286 "jsonrpc": "2.0", 00:33:07.286 "id": 1, 00:33:07.286 "result": true 00:33:07.286 } 00:33:07.286 00:33:07.286 INFO: response: 00:33:07.286 { 00:33:07.286 "jsonrpc": "2.0", 00:33:07.286 "id": 1, 00:33:07.286 "result": true 00:33:07.286 } 00:33:07.286 00:33:07.286 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.286 17:27:25 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:07.286 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.286 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.286 INFO: Setting log level to 40 00:33:07.286 INFO: Setting log level to 40 00:33:07.286 INFO: Setting log level to 40 00:33:07.286 [2024-11-20 17:27:25.124132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.286 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.286 17:27:25 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:07.286 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:07.286 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.286 17:27:25 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:07.286 17:27:25 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.286 17:27:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:10.616 Nvme0n1 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:10.616 [2024-11-20 17:27:28.033603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.616 17:27:28 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:10.616 [ 00:33:10.616 { 00:33:10.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:10.616 "subtype": "Discovery", 00:33:10.616 "listen_addresses": [], 00:33:10.616 "allow_any_host": true, 00:33:10.616 "hosts": [] 00:33:10.616 }, 00:33:10.616 { 00:33:10.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:10.616 "subtype": "NVMe", 00:33:10.616 "listen_addresses": [ 00:33:10.616 { 00:33:10.616 "trtype": "TCP", 00:33:10.616 "adrfam": "IPv4", 00:33:10.616 "traddr": "10.0.0.2", 00:33:10.616 "trsvcid": "4420" 00:33:10.616 } 00:33:10.616 ], 00:33:10.616 "allow_any_host": true, 00:33:10.616 "hosts": [], 00:33:10.616 "serial_number": "SPDK00000000000001", 00:33:10.616 "model_number": "SPDK bdev Controller", 00:33:10.616 "max_namespaces": 1, 00:33:10.616 "min_cntlid": 1, 00:33:10.616 "max_cntlid": 65519, 00:33:10.616 "namespaces": [ 00:33:10.616 { 00:33:10.616 "nsid": 1, 00:33:10.616 "bdev_name": "Nvme0n1", 00:33:10.616 "name": "Nvme0n1", 00:33:10.616 "nguid": "75B17E21CF194DFA8D0B59EF99CCF59A", 00:33:10.616 "uuid": "75b17e21-cf19-4dfa-8d0b-59ef99ccf59a" 00:33:10.616 } 00:33:10.616 ] 00:33:10.616 } 00:33:10.616 ] 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:10.616 17:27:28 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.616 rmmod nvme_tcp 00:33:10.616 rmmod nvme_fabrics 00:33:10.616 rmmod nvme_keyring 00:33:10.616 17:27:28 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2744101 ']' 00:33:10.616 17:27:28 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2744101 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2744101 ']' 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2744101 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2744101 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:10.616 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:10.617 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2744101' 00:33:10.617 killing process with pid 2744101 00:33:10.617 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2744101 00:33:10.617 17:27:28 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2744101 00:33:12.587 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:12.587 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:12.587 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:12.587 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:12.587 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:12.587 17:27:30 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:12.587 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:12.846 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:12.847 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:12.847 17:27:30 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.847 17:27:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:12.847 17:27:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.751 17:27:32 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:14.751 00:33:14.751 real 0m23.402s 00:33:14.751 user 0m30.062s 00:33:14.751 sys 0m6.234s 00:33:14.751 17:27:32 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:14.751 17:27:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:14.751 ************************************ 00:33:14.751 END TEST nvmf_identify_passthru 00:33:14.751 ************************************ 00:33:14.751 17:27:32 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:14.751 17:27:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:14.751 17:27:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:14.751 17:27:32 -- common/autotest_common.sh@10 -- # set +x 00:33:14.751 ************************************ 00:33:14.751 START TEST nvmf_dif 00:33:14.751 ************************************ 00:33:14.751 17:27:32 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:15.011 * Looking for test storage... 
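The identify_passthru run above drives the target entirely over JSON-RPC: enable the passthru identify handler before init, start the framework, create the TCP transport, attach the local PCIe controller as bdev Nvme0, then publish it as subsystem cnode1 with a TCP listener. A dry-run sketch of that call sequence follows; the `rpc` helper is an assumption (it echoes rather than invoking scripts/rpc.py), while the method names and arguments are taken from the log:

```shell
# Dry-run sketch of the RPC sequence from the identify_passthru test.
# "rpc" stands in for scripts/rpc.py so the order of calls can be
# read without a running nvmf_tgt. Order matters: nvmf_set_config
# must precede framework_start_init for passthru identify to apply.
rpc() { echo "rpc.py $*"; }

rpc nvmf_set_config --passthru-identify-ctrlr
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

With passthru identify enabled, the serial and model numbers reported over the fabric (PHLN951000C61P6AGN / INTEL) match the local controller's, which is exactly the equality the test asserts before tearing down.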
00:33:15.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:15.011 17:27:32 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:15.011 17:27:32 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:15.011 17:27:32 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:15.011 17:27:32 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.011 17:27:32 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:15.011 17:27:32 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.011 17:27:32 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.011 --rc genhtml_branch_coverage=1 00:33:15.011 --rc genhtml_function_coverage=1 00:33:15.011 --rc genhtml_legend=1 00:33:15.011 --rc geninfo_all_blocks=1 00:33:15.012 --rc geninfo_unexecuted_blocks=1 00:33:15.012 00:33:15.012 ' 00:33:15.012 17:27:32 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:15.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.012 --rc genhtml_branch_coverage=1 00:33:15.012 --rc genhtml_function_coverage=1 00:33:15.012 --rc genhtml_legend=1 00:33:15.012 --rc geninfo_all_blocks=1 00:33:15.012 --rc geninfo_unexecuted_blocks=1 00:33:15.012 00:33:15.012 ' 00:33:15.012 17:27:32 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:15.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.012 --rc genhtml_branch_coverage=1 00:33:15.012 --rc genhtml_function_coverage=1 00:33:15.012 --rc genhtml_legend=1 00:33:15.012 --rc geninfo_all_blocks=1 00:33:15.012 --rc geninfo_unexecuted_blocks=1 00:33:15.012 00:33:15.012 ' 00:33:15.012 17:27:32 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:15.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.012 --rc genhtml_branch_coverage=1 00:33:15.012 --rc genhtml_function_coverage=1 00:33:15.012 --rc genhtml_legend=1 00:33:15.012 --rc geninfo_all_blocks=1 00:33:15.012 --rc geninfo_unexecuted_blocks=1 00:33:15.012 00:33:15.012 ' 00:33:15.012 17:27:32 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:15.012 17:27:32 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.012 17:27:32 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.012 17:27:32 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.012 17:27:32 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.012 17:27:32 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.012 17:27:32 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.012 17:27:32 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.012 17:27:32 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.012 17:27:32 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:15.012 17:27:32 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:15.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.012 17:27:32 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:15.012 17:27:32 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
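The lcov-detection trace above exercises cmp_versions from scripts/common.sh: both version strings are split on `.`, `-`, and `:` into arrays, then compared field by field, treating missing fields as 0 (which is how `lt 1.15 2` resolves to true). A condensed bash sketch of that logic follows; `version_lt` is a hypothetical name for this sketch, and it assumes purely numeric fields, as the traced comparison does:

```shell
# Sketch of the cmp_versions "<" path traced above (scripts/common.sh):
# split both versions on .-: and compare numerically, field by field,
# padding the shorter version with zeros. Numeric fields only.
version_lt() {
  local IFS=.-:
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local v=0
  while (( v < ${#a[@]} || v < ${#b[@]} )); do
    local x=${a[v]:-0} y=${b[v]:-0}
    (( x < y )) && return 0   # first differing field decides
    (( x > y )) && return 1
    (( v++ ))
  done
  return 1                    # equal versions: not strictly less
}

version_lt 1.15 2 && echo "1.15 < 2"
```

The harness uses this to decide whether the installed lcov is new enough to need the `--rc lcov_branch_coverage=1` style options seen in the exported LCOV_OPTS above.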
00:33:15.012 17:27:32 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:15.012 17:27:32 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:15.012 17:27:32 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.012 17:27:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:15.012 17:27:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:15.012 17:27:32 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:15.012 17:27:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:21.579 17:27:38 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:21.580 17:27:38 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:21.580 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:21.580 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.580 17:27:38 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:21.580 Found net devices under 0000:86:00.0: cvl_0_0 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:21.580 Found net devices under 0000:86:00.1: cvl_0_1 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.580 
17:27:38 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:21.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:21.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:33:21.580 00:33:21.580 --- 10.0.0.2 ping statistics --- 00:33:21.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.580 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:21.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:33:21.580 00:33:21.580 --- 10.0.0.1 ping statistics --- 00:33:21.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.580 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:21.580 17:27:38 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:23.486 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:23.486 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:23.486 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:23.486 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:23.486 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:23.486 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:23.486 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:23.486 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:23.486 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:23.746 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:23.746 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:23.746 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:23.746 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:23.746 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:23.746 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:23.746 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:23.746 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.746 17:27:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:23.746 17:27:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.746 17:27:41 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.746 17:27:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2749771 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2749771 00:33:23.746 17:27:41 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:23.746 17:27:41 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2749771 ']' 00:33:23.746 17:27:41 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.746 17:27:41 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.746 17:27:41 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:23.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.746 17:27:41 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.746 17:27:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.005 [2024-11-20 17:27:41.788971] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:33:24.005 [2024-11-20 17:27:41.789013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.005 [2024-11-20 17:27:41.868074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.005 [2024-11-20 17:27:41.908496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.005 [2024-11-20 17:27:41.908529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.005 [2024-11-20 17:27:41.908536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.005 [2024-11-20 17:27:41.908543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.005 [2024-11-20 17:27:41.908548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:24.005 [2024-11-20 17:27:41.909099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.005 17:27:41 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.005 17:27:41 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:24.005 17:27:42 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:24.005 17:27:42 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.005 17:27:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.005 17:27:42 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.005 17:27:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:24.005 17:27:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:24.005 17:27:42 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.005 17:27:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.005 [2024-11-20 17:27:42.043850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.264 17:27:42 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.264 17:27:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:24.264 17:27:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:24.264 17:27:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.264 17:27:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:24.264 ************************************ 00:33:24.264 START TEST fio_dif_1_default 00:33:24.264 ************************************ 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:24.264 bdev_null0 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:24.264 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:24.265 [2024-11-20 17:27:42.116167] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:24.265 { 00:33:24.265 "params": { 00:33:24.265 "name": "Nvme$subsystem", 00:33:24.265 "trtype": "$TEST_TRANSPORT", 00:33:24.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:24.265 "adrfam": "ipv4", 00:33:24.265 "trsvcid": "$NVMF_PORT", 00:33:24.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:24.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:24.265 "hdgst": ${hdgst:-false}, 00:33:24.265 "ddgst": ${ddgst:-false} 00:33:24.265 }, 00:33:24.265 "method": "bdev_nvme_attach_controller" 00:33:24.265 } 00:33:24.265 EOF 00:33:24.265 )") 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:24.265 "params": { 00:33:24.265 "name": "Nvme0", 00:33:24.265 "trtype": "tcp", 00:33:24.265 "traddr": "10.0.0.2", 00:33:24.265 "adrfam": "ipv4", 00:33:24.265 "trsvcid": "4420", 00:33:24.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:24.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:24.265 "hdgst": false, 00:33:24.265 "ddgst": false 00:33:24.265 }, 00:33:24.265 "method": "bdev_nvme_attach_controller" 00:33:24.265 }' 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:24.265 17:27:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:24.524 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:24.524 fio-3.35 
00:33:24.524 Starting 1 thread 00:33:36.733 00:33:36.733 filename0: (groupid=0, jobs=1): err= 0: pid=2750040: Wed Nov 20 17:27:53 2024 00:33:36.733 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10006msec) 00:33:36.733 slat (nsec): min=5928, max=33478, avg=6264.17, stdev=1106.63 00:33:36.733 clat (usec): min=40842, max=44569, avg=41328.05, stdev=508.28 00:33:36.733 lat (usec): min=40848, max=44603, avg=41334.31, stdev=508.52 00:33:36.733 clat percentiles (usec): 00:33:36.733 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:36.733 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:36.733 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:36.733 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:33:36.733 | 99.99th=[44827] 00:33:36.733 bw ( KiB/s): min= 352, max= 416, per=100.00%, avg=387.37, stdev=18.15, samples=19 00:33:36.733 iops : min= 88, max= 104, avg=96.84, stdev= 4.54, samples=19 00:33:36.733 lat (msec) : 50=100.00% 00:33:36.733 cpu : usr=92.21%, sys=7.54%, ctx=15, majf=0, minf=0 00:33:36.733 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.733 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.733 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:36.733 00:33:36.733 Run status group 0 (all jobs): 00:33:36.733 READ: bw=387KiB/s (396kB/s), 387KiB/s-387KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10006-10006msec 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:36.733 17:27:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 00:33:36.734 real 0m11.245s 00:33:36.734 user 0m15.990s 00:33:36.734 sys 0m1.039s 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 ************************************ 00:33:36.734 END TEST fio_dif_1_default 00:33:36.734 ************************************ 00:33:36.734 17:27:53 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:36.734 17:27:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:36.734 17:27:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 ************************************ 00:33:36.734 START TEST fio_dif_1_multi_subsystems 00:33:36.734 ************************************ 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 bdev_null0 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 [2024-11-20 17:27:53.437063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 bdev_null1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 17:27:53 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.734 { 00:33:36.734 "params": { 00:33:36.734 "name": "Nvme$subsystem", 00:33:36.734 "trtype": "$TEST_TRANSPORT", 00:33:36.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.734 "adrfam": "ipv4", 00:33:36.734 "trsvcid": "$NVMF_PORT", 00:33:36.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.734 "hdgst": ${hdgst:-false}, 00:33:36.734 "ddgst": ${ddgst:-false} 00:33:36.734 }, 00:33:36.734 "method": "bdev_nvme_attach_controller" 00:33:36.734 } 00:33:36.734 EOF 00:33:36.734 )") 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.734 17:27:53 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.734 { 00:33:36.734 "params": { 00:33:36.734 "name": "Nvme$subsystem", 00:33:36.734 "trtype": "$TEST_TRANSPORT", 00:33:36.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.734 "adrfam": "ipv4", 00:33:36.734 "trsvcid": "$NVMF_PORT", 00:33:36.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.734 "hdgst": ${hdgst:-false}, 00:33:36.734 "ddgst": ${ddgst:-false} 00:33:36.734 }, 00:33:36.734 "method": "bdev_nvme_attach_controller" 00:33:36.734 } 00:33:36.734 EOF 00:33:36.734 )") 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:36.734 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.734 "params": { 00:33:36.734 "name": "Nvme0", 00:33:36.734 "trtype": "tcp", 00:33:36.734 "traddr": "10.0.0.2", 00:33:36.734 "adrfam": "ipv4", 00:33:36.734 "trsvcid": "4420", 00:33:36.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.734 "hdgst": false, 00:33:36.734 "ddgst": false 00:33:36.734 }, 00:33:36.734 "method": "bdev_nvme_attach_controller" 00:33:36.734 },{ 00:33:36.734 "params": { 00:33:36.734 "name": "Nvme1", 00:33:36.734 "trtype": "tcp", 00:33:36.734 "traddr": "10.0.0.2", 00:33:36.734 "adrfam": "ipv4", 00:33:36.734 "trsvcid": "4420", 00:33:36.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.735 "hdgst": false, 00:33:36.735 "ddgst": false 00:33:36.735 }, 00:33:36.735 "method": "bdev_nvme_attach_controller" 00:33:36.735 }' 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:36.735 17:27:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:36.735 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:36.735 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:36.735 fio-3.35 00:33:36.735 Starting 2 threads 00:33:46.713 00:33:46.713 filename0: (groupid=0, jobs=1): err= 0: pid=2751984: Wed Nov 20 17:28:04 2024 00:33:46.713 read: IOPS=194, BW=780KiB/s (799kB/s)(7808KiB/10012msec) 00:33:46.713 slat (nsec): min=5906, max=53289, avg=7107.41, stdev=2322.84 00:33:46.713 clat (usec): min=369, max=42533, avg=20494.55, stdev=20418.36 00:33:46.713 lat (usec): min=376, max=42540, avg=20501.66, stdev=20417.75 00:33:46.713 clat percentiles (usec): 00:33:46.713 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 408], 00:33:46.713 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 461], 60.00th=[40633], 00:33:46.713 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:46.713 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:46.713 | 99.99th=[42730] 00:33:46.713 bw ( KiB/s): min= 704, max= 896, per=50.42%, avg=779.20, stdev=40.58, samples=20 00:33:46.713 iops : min= 176, max= 224, avg=194.80, stdev=10.14, samples=20 00:33:46.713 lat (usec) : 500=50.82% 00:33:46.713 lat (msec) : 50=49.18% 00:33:46.713 cpu : usr=97.06%, sys=2.68%, ctx=9, majf=0, minf=0 00:33:46.713 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.713 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.713 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.713 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:46.713 filename1: (groupid=0, jobs=1): err= 0: pid=2751985: Wed Nov 20 17:28:04 2024 00:33:46.713 read: IOPS=191, BW=766KiB/s (785kB/s)(7680KiB/10024msec) 00:33:46.713 slat (nsec): min=6041, max=54777, avg=7030.45, stdev=2136.73 00:33:46.713 clat (usec): min=366, max=43408, avg=20862.15, stdev=20470.40 00:33:46.713 lat (usec): min=373, max=43463, avg=20869.18, stdev=20469.85 00:33:46.713 clat percentiles (usec): 00:33:46.713 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 383], 20.00th=[ 392], 00:33:46.713 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 478], 60.00th=[40633], 00:33:46.713 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:33:46.713 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:33:46.713 | 99.99th=[43254] 00:33:46.713 bw ( KiB/s): min= 672, max= 832, per=49.58%, avg=766.40, stdev=26.42, samples=20 00:33:46.713 iops : min= 168, max= 208, avg=191.60, stdev= 6.60, samples=20 00:33:46.713 lat (usec) : 500=50.00% 00:33:46.713 lat (msec) : 50=50.00% 00:33:46.713 cpu : usr=96.94%, sys=2.79%, ctx=15, majf=0, minf=0 00:33:46.713 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.713 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.714 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:46.714 00:33:46.714 Run status group 0 (all jobs): 00:33:46.714 READ: bw=1545KiB/s (1582kB/s), 766KiB/s-780KiB/s (785kB/s-799kB/s), io=15.1MiB (15.9MB), run=10012-10024msec 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 17:28:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.973 00:33:46.973 real 0m11.500s 00:33:46.973 user 0m26.461s 00:33:46.973 sys 0m0.887s 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:46.973 17:28:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 ************************************ 00:33:46.973 END TEST fio_dif_1_multi_subsystems 00:33:46.973 ************************************ 00:33:46.973 17:28:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:46.973 17:28:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:46.973 17:28:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:46.973 17:28:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 ************************************ 00:33:46.973 START TEST fio_dif_rand_params 00:33:46.973 ************************************ 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:46.973 17:28:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 bdev_null0 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.973 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:46.973 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.973 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:46.973 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.973 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:46.973 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.973 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:46.973 [2024-11-20 17:28:05.012827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.232 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:47.233 { 00:33:47.233 "params": { 00:33:47.233 "name": "Nvme$subsystem", 00:33:47.233 "trtype": "$TEST_TRANSPORT", 00:33:47.233 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:47.233 "adrfam": "ipv4", 00:33:47.233 "trsvcid": "$NVMF_PORT", 00:33:47.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.233 "hdgst": ${hdgst:-false}, 00:33:47.233 "ddgst": ${ddgst:-false} 00:33:47.233 }, 00:33:47.233 "method": "bdev_nvme_attach_controller" 00:33:47.233 } 00:33:47.233 EOF 00:33:47.233 )") 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:47.233 17:28:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:47.233 "params": { 00:33:47.233 "name": "Nvme0", 00:33:47.233 "trtype": "tcp", 00:33:47.233 "traddr": "10.0.0.2", 00:33:47.233 "adrfam": "ipv4", 00:33:47.233 "trsvcid": "4420", 00:33:47.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:47.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:47.233 "hdgst": false, 00:33:47.233 "ddgst": false 00:33:47.233 }, 00:33:47.233 "method": "bdev_nvme_attach_controller" 00:33:47.233 }' 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:47.233 17:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.491 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:47.491 ... 00:33:47.491 fio-3.35 00:33:47.491 Starting 3 threads 00:33:54.058 00:33:54.058 filename0: (groupid=0, jobs=1): err= 0: pid=2753890: Wed Nov 20 17:28:11 2024 00:33:54.058 read: IOPS=332, BW=41.6MiB/s (43.6MB/s)(210MiB/5048msec) 00:33:54.058 slat (nsec): min=6095, max=65439, avg=15136.35, stdev=7041.06 00:33:54.058 clat (usec): min=5144, max=50018, avg=8970.73, stdev=4194.57 00:33:54.058 lat (usec): min=5151, max=50046, avg=8985.86, stdev=4195.05 00:33:54.058 clat percentiles (usec): 00:33:54.058 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7767], 00:33:54.058 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:33:54.058 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10421], 00:33:54.058 | 99.00th=[47973], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:33:54.058 | 99.99th=[50070] 00:33:54.058 bw ( KiB/s): min=36608, max=47872, per=35.85%, avg=42956.80, stdev=2953.08, samples=10 00:33:54.058 iops : min= 286, max= 374, avg=335.60, stdev=23.07, samples=10 00:33:54.058 lat (msec) : 10=91.31%, 20=7.68%, 50=0.95%, 100=0.06% 00:33:54.058 cpu : usr=94.75%, sys=4.74%, ctx=155, majf=0, minf=63 00:33:54.058 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.058 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.058 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:54.058 filename0: (groupid=0, jobs=1): err= 0: pid=2753891: Wed Nov 20 17:28:11 2024 00:33:54.058 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(194MiB/5045msec) 00:33:54.058 slat (nsec): min=6101, max=43674, avg=13472.55, stdev=6140.45 00:33:54.058 
clat (usec): min=4849, max=52306, avg=9696.91, stdev=4417.22 00:33:54.058 lat (usec): min=4856, max=52330, avg=9710.38, stdev=4417.35 00:33:54.058 clat percentiles (usec): 00:33:54.058 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7439], 20.00th=[ 8225], 00:33:54.058 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:33:54.058 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11469], 00:33:54.058 | 99.00th=[46400], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:33:54.058 | 99.99th=[52167] 00:33:54.058 bw ( KiB/s): min=35840, max=43520, per=33.16%, avg=39731.20, stdev=2527.94, samples=10 00:33:54.058 iops : min= 280, max= 340, avg=310.40, stdev=19.75, samples=10 00:33:54.058 lat (msec) : 10=68.08%, 20=30.82%, 50=0.58%, 100=0.51% 00:33:54.059 cpu : usr=95.06%, sys=4.62%, ctx=9, majf=0, minf=52 00:33:54.059 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.059 issued rwts: total=1554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:54.059 filename0: (groupid=0, jobs=1): err= 0: pid=2753892: Wed Nov 20 17:28:11 2024 00:33:54.059 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5005msec) 00:33:54.059 slat (nsec): min=6082, max=70760, avg=13548.70, stdev=6416.24 00:33:54.059 clat (usec): min=3511, max=52158, avg=10054.07, stdev=4003.12 00:33:54.059 lat (usec): min=3519, max=52184, avg=10067.62, stdev=4003.39 00:33:54.059 clat percentiles (usec): 00:33:54.059 | 1.00th=[ 4948], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 8717], 00:33:54.059 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:33:54.059 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11600], 95.00th=[12125], 00:33:54.059 | 99.00th=[13173], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 
00:33:54.059 | 99.99th=[52167] 00:33:54.059 bw ( KiB/s): min=28416, max=41984, per=31.82%, avg=38118.40, stdev=3822.80, samples=10 00:33:54.059 iops : min= 222, max= 328, avg=297.80, stdev=29.87, samples=10 00:33:54.059 lat (msec) : 4=0.54%, 10=50.23%, 20=48.42%, 50=0.40%, 100=0.40% 00:33:54.059 cpu : usr=95.22%, sys=4.46%, ctx=10, majf=0, minf=81 00:33:54.059 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.059 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:54.059 00:33:54.059 Run status group 0 (all jobs): 00:33:54.059 READ: bw=117MiB/s (123MB/s), 37.2MiB/s-41.6MiB/s (39.0MB/s-43.6MB/s), io=591MiB (619MB), run=5005-5048msec 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 bdev_null0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 
17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 [2024-11-20 17:28:11.394237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 bdev_null1 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 
17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:54.059 bdev_null2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:54.059 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.060 { 00:33:54.060 "params": { 00:33:54.060 "name": "Nvme$subsystem", 00:33:54.060 "trtype": "$TEST_TRANSPORT", 00:33:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.060 "adrfam": "ipv4", 00:33:54.060 "trsvcid": "$NVMF_PORT", 00:33:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.060 "hdgst": ${hdgst:-false}, 00:33:54.060 "ddgst": ${ddgst:-false} 00:33:54.060 }, 00:33:54.060 "method": "bdev_nvme_attach_controller" 00:33:54.060 } 00:33:54.060 EOF 00:33:54.060 )") 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.060 { 00:33:54.060 "params": { 00:33:54.060 "name": "Nvme$subsystem", 00:33:54.060 "trtype": "$TEST_TRANSPORT", 00:33:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.060 "adrfam": "ipv4", 00:33:54.060 "trsvcid": "$NVMF_PORT", 00:33:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.060 "hdgst": ${hdgst:-false}, 00:33:54.060 "ddgst": ${ddgst:-false} 00:33:54.060 }, 00:33:54.060 "method": "bdev_nvme_attach_controller" 00:33:54.060 } 00:33:54.060 EOF 00:33:54.060 )") 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.060 { 00:33:54.060 "params": { 00:33:54.060 "name": "Nvme$subsystem", 00:33:54.060 "trtype": "$TEST_TRANSPORT", 00:33:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.060 "adrfam": "ipv4", 00:33:54.060 "trsvcid": "$NVMF_PORT", 00:33:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.060 "hdgst": ${hdgst:-false}, 00:33:54.060 "ddgst": ${ddgst:-false} 00:33:54.060 }, 00:33:54.060 "method": "bdev_nvme_attach_controller" 00:33:54.060 } 00:33:54.060 EOF 00:33:54.060 )") 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:54.060 "params": { 00:33:54.060 "name": "Nvme0", 00:33:54.060 "trtype": "tcp", 00:33:54.060 "traddr": "10.0.0.2", 00:33:54.060 "adrfam": "ipv4", 00:33:54.060 "trsvcid": "4420", 00:33:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:54.060 "hdgst": false, 00:33:54.060 "ddgst": false 00:33:54.060 }, 00:33:54.060 "method": "bdev_nvme_attach_controller" 00:33:54.060 },{ 00:33:54.060 "params": { 00:33:54.060 "name": "Nvme1", 00:33:54.060 "trtype": "tcp", 00:33:54.060 "traddr": "10.0.0.2", 00:33:54.060 "adrfam": "ipv4", 00:33:54.060 "trsvcid": "4420", 00:33:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.060 "hdgst": false, 00:33:54.060 "ddgst": false 00:33:54.060 }, 00:33:54.060 "method": "bdev_nvme_attach_controller" 00:33:54.060 },{ 00:33:54.060 "params": { 00:33:54.060 "name": "Nvme2", 00:33:54.060 "trtype": "tcp", 00:33:54.060 "traddr": "10.0.0.2", 00:33:54.060 "adrfam": "ipv4", 00:33:54.060 "trsvcid": "4420", 00:33:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:54.060 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:54.060 "hdgst": false, 00:33:54.060 "ddgst": false 00:33:54.060 }, 00:33:54.060 "method": "bdev_nvme_attach_controller" 00:33:54.060 }' 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:54.060 17:28:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:54.060 17:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:54.060 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:54.060 ... 00:33:54.060 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:54.060 ... 00:33:54.060 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:54.060 ... 
00:33:54.060 fio-3.35 00:33:54.060 Starting 24 threads 00:34:06.258 00:34:06.258 filename0: (groupid=0, jobs=1): err= 0: pid=2755158: Wed Nov 20 17:28:22 2024 00:34:06.258 read: IOPS=597, BW=2390KiB/s (2447kB/s)(23.4MiB/10016msec) 00:34:06.258 slat (nsec): min=7395, max=90494, avg=24409.27, stdev=17559.23 00:34:06.258 clat (usec): min=11146, max=30924, avg=26602.26, stdev=2132.67 00:34:06.258 lat (usec): min=11159, max=30947, avg=26626.67, stdev=2131.92 00:34:06.258 clat percentiles (usec): 00:34:06.258 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:06.258 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:34:06.258 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30278], 95.00th=[30540], 00:34:06.258 | 99.00th=[30802], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:34:06.258 | 99.99th=[30802] 00:34:06.258 bw ( KiB/s): min= 2171, max= 2560, per=4.17%, avg=2386.95, stdev=126.92, samples=20 00:34:06.258 iops : min= 542, max= 640, avg=596.70, stdev=31.80, samples=20 00:34:06.258 lat (msec) : 20=0.53%, 50=99.47% 00:34:06.258 cpu : usr=98.19%, sys=1.15%, ctx=145, majf=0, minf=64 00:34:06.258 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.258 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.258 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.258 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.258 filename0: (groupid=0, jobs=1): err= 0: pid=2755159: Wed Nov 20 17:28:22 2024 00:34:06.258 read: IOPS=596, BW=2385KiB/s (2443kB/s)(23.3MiB/10008msec) 00:34:06.258 slat (nsec): min=5623, max=94228, avg=48828.01, stdev=17815.49 00:34:06.258 clat (usec): min=8548, max=41499, avg=26376.44, stdev=2288.18 00:34:06.258 lat (usec): min=8563, max=41518, avg=26425.27, stdev=2290.51 00:34:06.258 clat percentiles (usec): 00:34:06.258 | 
1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:06.258 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.258 | 70.00th=[26608], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.258 | 99.00th=[30540], 99.50th=[30802], 99.90th=[41681], 99.95th=[41681], 00:34:06.258 | 99.99th=[41681] 00:34:06.258 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2378.32, stdev=137.13, samples=19 00:34:06.258 iops : min= 544, max= 640, avg=594.58, stdev=34.28, samples=19 00:34:06.258 lat (msec) : 10=0.27%, 20=0.27%, 50=99.46% 00:34:06.258 cpu : usr=98.20%, sys=1.26%, ctx=70, majf=0, minf=25 00:34:06.258 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.258 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.258 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.258 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.258 filename0: (groupid=0, jobs=1): err= 0: pid=2755160: Wed Nov 20 17:28:22 2024 00:34:06.258 read: IOPS=596, BW=2385KiB/s (2443kB/s)(23.3MiB/10008msec) 00:34:06.258 slat (nsec): min=6616, max=94091, avg=49832.34, stdev=17089.98 00:34:06.258 clat (usec): min=8470, max=41400, avg=26381.10, stdev=2287.64 00:34:06.258 lat (usec): min=8484, max=41421, avg=26430.93, stdev=2289.71 00:34:06.258 clat percentiles (usec): 00:34:06.258 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:06.258 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.258 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.258 | 99.00th=[30540], 99.50th=[30802], 99.90th=[41157], 99.95th=[41157], 00:34:06.258 | 99.99th=[41157] 00:34:06.258 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2378.32, stdev=137.13, samples=19 00:34:06.258 iops : min= 544, max= 640, avg=594.58, stdev=34.28, 
samples=19 00:34:06.258 lat (msec) : 10=0.27%, 20=0.27%, 50=99.46% 00:34:06.259 cpu : usr=98.96%, sys=0.67%, ctx=13, majf=0, minf=32 00:34:06.259 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.259 filename0: (groupid=0, jobs=1): err= 0: pid=2755161: Wed Nov 20 17:28:22 2024 00:34:06.259 read: IOPS=596, BW=2385KiB/s (2442kB/s)(23.3MiB/10007msec) 00:34:06.259 slat (nsec): min=6740, max=96525, avg=41724.23, stdev=22431.30 00:34:06.259 clat (usec): min=7513, max=47548, avg=26416.43, stdev=2446.77 00:34:06.259 lat (usec): min=7522, max=47570, avg=26458.15, stdev=2449.43 00:34:06.259 clat percentiles (usec): 00:34:06.259 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.259 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.259 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.259 | 99.00th=[30540], 99.50th=[30802], 99.90th=[47449], 99.95th=[47449], 00:34:06.259 | 99.99th=[47449] 00:34:06.259 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2378.11, stdev=130.23, samples=19 00:34:06.259 iops : min= 544, max= 640, avg=594.53, stdev=32.56, samples=19 00:34:06.259 lat (msec) : 10=0.23%, 20=0.30%, 50=99.46% 00:34:06.259 cpu : usr=97.96%, sys=1.33%, ctx=225, majf=0, minf=24 00:34:06.259 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 issued rwts: total=5966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.259 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:34:06.259 filename0: (groupid=0, jobs=1): err= 0: pid=2755162: Wed Nov 20 17:28:22 2024 00:34:06.259 read: IOPS=596, BW=2384KiB/s (2442kB/s)(23.3MiB/10012msec) 00:34:06.259 slat (nsec): min=7613, max=95275, avg=50250.06, stdev=16572.20 00:34:06.259 clat (usec): min=18791, max=31076, avg=26406.58, stdev=1967.51 00:34:06.259 lat (usec): min=18851, max=31096, avg=26456.83, stdev=1969.32 00:34:06.259 clat percentiles (usec): 00:34:06.259 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:06.259 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.259 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.259 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:34:06.259 | 99.99th=[31065] 00:34:06.259 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2384.84, stdev=136.34, samples=19 00:34:06.259 iops : min= 512, max= 640, avg=596.21, stdev=34.08, samples=19 00:34:06.259 lat (msec) : 20=0.27%, 50=99.73% 00:34:06.259 cpu : usr=98.74%, sys=0.90%, ctx=35, majf=0, minf=24 00:34:06.259 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.259 filename0: (groupid=0, jobs=1): err= 0: pid=2755163: Wed Nov 20 17:28:22 2024 00:34:06.259 read: IOPS=595, BW=2383KiB/s (2440kB/s)(23.3MiB/10017msec) 00:34:06.259 slat (nsec): min=7325, max=92418, avg=34156.90, stdev=19281.01 00:34:06.259 clat (usec): min=19454, max=37031, avg=26515.78, stdev=2003.56 00:34:06.259 lat (usec): min=19476, max=37059, avg=26549.94, stdev=2006.49 00:34:06.259 clat percentiles (usec): 00:34:06.259 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 
20.00th=[24773], 00:34:06.259 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26346], 00:34:06.259 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30016], 95.00th=[30278], 00:34:06.259 | 99.00th=[30540], 99.50th=[30802], 99.90th=[36963], 99.95th=[36963], 00:34:06.259 | 99.99th=[36963] 00:34:06.259 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2381.00, stdev=134.01, samples=20 00:34:06.259 iops : min= 512, max= 672, avg=595.25, stdev=33.50, samples=20 00:34:06.259 lat (msec) : 20=0.27%, 50=99.73% 00:34:06.259 cpu : usr=98.46%, sys=0.97%, ctx=67, majf=0, minf=44 00:34:06.259 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.259 filename0: (groupid=0, jobs=1): err= 0: pid=2755164: Wed Nov 20 17:28:22 2024 00:34:06.259 read: IOPS=595, BW=2383KiB/s (2440kB/s)(23.3MiB/10018msec) 00:34:06.259 slat (nsec): min=7047, max=89078, avg=37022.28, stdev=20062.99 00:34:06.259 clat (usec): min=17483, max=37654, avg=26504.23, stdev=2024.20 00:34:06.259 lat (usec): min=17497, max=37674, avg=26541.25, stdev=2027.94 00:34:06.259 clat percentiles (usec): 00:34:06.259 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.259 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:34:06.259 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30016], 95.00th=[30278], 00:34:06.259 | 99.00th=[30540], 99.50th=[30802], 99.90th=[37487], 99.95th=[37487], 00:34:06.259 | 99.99th=[37487] 00:34:06.259 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2380.80, stdev=133.93, samples=20 00:34:06.259 iops : min= 512, max= 672, avg=595.20, stdev=33.48, samples=20 00:34:06.259 lat (msec) : 20=0.34%, 50=99.66% 
00:34:06.259 cpu : usr=98.69%, sys=0.93%, ctx=21, majf=0, minf=38 00:34:06.259 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.259 filename0: (groupid=0, jobs=1): err= 0: pid=2755165: Wed Nov 20 17:28:22 2024 00:34:06.259 read: IOPS=597, BW=2390KiB/s (2447kB/s)(23.4MiB/10016msec) 00:34:06.259 slat (usec): min=6, max=195, avg=44.31, stdev=16.22 00:34:06.259 clat (usec): min=9142, max=30855, avg=26420.76, stdev=2163.20 00:34:06.259 lat (usec): min=9152, max=30887, avg=26465.06, stdev=2164.86 00:34:06.259 clat percentiles (usec): 00:34:06.259 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:34:06.259 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:34:06.259 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.259 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:34:06.259 | 99.99th=[30802] 00:34:06.259 bw ( KiB/s): min= 2171, max= 2565, per=4.17%, avg=2387.20, stdev=127.28, samples=20 00:34:06.259 iops : min= 542, max= 641, avg=596.75, stdev=31.87, samples=20 00:34:06.259 lat (msec) : 10=0.27%, 20=0.27%, 50=99.47% 00:34:06.259 cpu : usr=98.64%, sys=0.96%, ctx=24, majf=0, minf=34 00:34:06.259 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.259 filename1: (groupid=0, jobs=1): err= 0: 
pid=2755166: Wed Nov 20 17:28:22 2024 00:34:06.259 read: IOPS=597, BW=2391KiB/s (2449kB/s)(23.4MiB/10009msec) 00:34:06.259 slat (nsec): min=6348, max=83279, avg=19820.11, stdev=13893.84 00:34:06.259 clat (usec): min=10704, max=30953, avg=26606.53, stdev=2178.92 00:34:06.259 lat (usec): min=10720, max=30978, avg=26626.35, stdev=2178.78 00:34:06.259 clat percentiles (usec): 00:34:06.259 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:06.259 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26608], 00:34:06.259 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30278], 95.00th=[30540], 00:34:06.259 | 99.00th=[30802], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:34:06.259 | 99.99th=[31065] 00:34:06.259 bw ( KiB/s): min= 2171, max= 2688, per=4.18%, avg=2391.32, stdev=154.54, samples=19 00:34:06.259 iops : min= 542, max= 672, avg=597.79, stdev=38.69, samples=19 00:34:06.259 lat (msec) : 20=0.80%, 50=99.20% 00:34:06.259 cpu : usr=98.79%, sys=0.82%, ctx=31, majf=0, minf=28 00:34:06.259 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.259 filename1: (groupid=0, jobs=1): err= 0: pid=2755167: Wed Nov 20 17:28:22 2024 00:34:06.259 read: IOPS=596, BW=2384KiB/s (2442kB/s)(23.3MiB/10012msec) 00:34:06.259 slat (usec): min=7, max=104, avg=46.45, stdev=15.03 00:34:06.259 clat (usec): min=17236, max=31282, avg=26449.75, stdev=1958.02 00:34:06.259 lat (usec): min=17245, max=31307, avg=26496.21, stdev=1961.17 00:34:06.259 clat percentiles (usec): 00:34:06.259 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:34:06.259 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 
60.00th=[26346], 00:34:06.259 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.259 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327], 00:34:06.259 | 99.99th=[31327] 00:34:06.259 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2384.84, stdev=136.34, samples=19 00:34:06.259 iops : min= 512, max= 640, avg=596.21, stdev=34.08, samples=19 00:34:06.259 lat (msec) : 20=0.30%, 50=99.70% 00:34:06.259 cpu : usr=98.74%, sys=0.87%, ctx=37, majf=0, minf=26 00:34:06.259 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.259 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.259 filename1: (groupid=0, jobs=1): err= 0: pid=2755168: Wed Nov 20 17:28:22 2024 00:34:06.259 read: IOPS=595, BW=2383KiB/s (2440kB/s)(23.3MiB/10018msec) 00:34:06.259 slat (nsec): min=7657, max=96608, avg=41667.82, stdev=21826.01 00:34:06.259 clat (usec): min=15400, max=39854, avg=26475.63, stdev=2077.88 00:34:06.259 lat (usec): min=15411, max=39879, avg=26517.30, stdev=2080.63 00:34:06.259 clat percentiles (usec): 00:34:06.260 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.260 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.260 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.260 | 99.00th=[30802], 99.50th=[30802], 99.90th=[39584], 99.95th=[39584], 00:34:06.260 | 99.99th=[40109] 00:34:06.260 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2384.84, stdev=136.34, samples=19 00:34:06.260 iops : min= 512, max= 640, avg=596.21, stdev=34.08, samples=19 00:34:06.260 lat (msec) : 20=0.27%, 50=99.73% 00:34:06.260 cpu : usr=98.15%, sys=1.14%, ctx=141, majf=0, minf=36 00:34:06.260 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.260 filename1: (groupid=0, jobs=1): err= 0: pid=2755169: Wed Nov 20 17:28:22 2024 00:34:06.260 read: IOPS=597, BW=2392KiB/s (2449kB/s)(23.4MiB/10012msec) 00:34:06.260 slat (nsec): min=5532, max=89062, avg=43031.09, stdev=16799.68 00:34:06.260 clat (usec): min=16534, max=37985, avg=26378.96, stdev=2372.63 00:34:06.260 lat (usec): min=16543, max=38026, avg=26421.99, stdev=2378.44 00:34:06.260 clat percentiles (usec): 00:34:06.260 | 1.00th=[17171], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:34:06.260 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.260 | 70.00th=[26870], 80.00th=[28443], 90.00th=[29754], 95.00th=[30278], 00:34:06.260 | 99.00th=[30540], 99.50th=[33162], 99.90th=[37487], 99.95th=[38011], 00:34:06.260 | 99.99th=[38011] 00:34:06.260 bw ( KiB/s): min= 2048, max= 2832, per=4.18%, avg=2392.42, stdev=173.18, samples=19 00:34:06.260 iops : min= 512, max= 708, avg=598.11, stdev=43.29, samples=19 00:34:06.260 lat (msec) : 20=1.60%, 50=98.40% 00:34:06.260 cpu : usr=98.63%, sys=0.98%, ctx=40, majf=0, minf=28 00:34:06.260 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 issued rwts: total=5986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.260 filename1: (groupid=0, jobs=1): err= 0: pid=2755170: Wed Nov 20 17:28:22 2024 00:34:06.260 read: IOPS=595, BW=2383KiB/s 
(2441kB/s)(23.3MiB/10016msec) 00:34:06.260 slat (nsec): min=6365, max=82502, avg=26821.78, stdev=16320.82 00:34:06.260 clat (usec): min=15282, max=38111, avg=26645.07, stdev=2068.95 00:34:06.260 lat (usec): min=15295, max=38134, avg=26671.89, stdev=2069.83 00:34:06.260 clat percentiles (usec): 00:34:06.260 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:34:06.260 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:34:06.260 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:34:06.260 | 99.00th=[30802], 99.50th=[30802], 99.90th=[38011], 99.95th=[38011], 00:34:06.260 | 99.99th=[38011] 00:34:06.260 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2384.84, stdev=122.26, samples=19 00:34:06.260 iops : min= 544, max= 640, avg=596.21, stdev=30.56, samples=19 00:34:06.260 lat (msec) : 20=0.30%, 50=99.70% 00:34:06.260 cpu : usr=98.60%, sys=0.97%, ctx=53, majf=0, minf=33 00:34:06.260 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.260 filename1: (groupid=0, jobs=1): err= 0: pid=2755171: Wed Nov 20 17:28:22 2024 00:34:06.260 read: IOPS=596, BW=2384KiB/s (2442kB/s)(23.3MiB/10012msec) 00:34:06.260 slat (nsec): min=7681, max=90983, avg=47156.47, stdev=16821.68 00:34:06.260 clat (usec): min=19014, max=30979, avg=26451.37, stdev=1965.75 00:34:06.260 lat (usec): min=19057, max=31002, avg=26498.53, stdev=1967.15 00:34:06.260 clat percentiles (usec): 00:34:06.260 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:34:06.260 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:34:06.260 | 70.00th=[26870], 80.00th=[28181], 
90.00th=[29754], 95.00th=[30278], 00:34:06.260 | 99.00th=[30540], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:34:06.260 | 99.99th=[31065] 00:34:06.260 bw ( KiB/s): min= 2048, max= 2560, per=4.17%, avg=2384.84, stdev=136.34, samples=19 00:34:06.260 iops : min= 512, max= 640, avg=596.21, stdev=34.08, samples=19 00:34:06.260 lat (msec) : 20=0.27%, 50=99.73% 00:34:06.260 cpu : usr=97.71%, sys=1.45%, ctx=341, majf=0, minf=45 00:34:06.260 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.260 filename1: (groupid=0, jobs=1): err= 0: pid=2755172: Wed Nov 20 17:28:22 2024 00:34:06.260 read: IOPS=596, BW=2385KiB/s (2443kB/s)(23.3MiB/10008msec) 00:34:06.260 slat (nsec): min=6351, max=86062, avg=37215.81, stdev=17002.70 00:34:06.260 clat (usec): min=7588, max=47863, avg=26487.57, stdev=2480.64 00:34:06.260 lat (usec): min=7622, max=47886, avg=26524.78, stdev=2482.08 00:34:06.260 clat percentiles (usec): 00:34:06.260 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.260 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:06.260 | 70.00th=[26870], 80.00th=[28181], 90.00th=[30016], 95.00th=[30278], 00:34:06.260 | 99.00th=[30540], 99.50th=[30802], 99.90th=[47973], 99.95th=[47973], 00:34:06.260 | 99.99th=[47973] 00:34:06.260 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2378.32, stdev=136.92, samples=19 00:34:06.260 iops : min= 544, max= 640, avg=594.58, stdev=34.23, samples=19 00:34:06.260 lat (msec) : 10=0.27%, 20=0.27%, 50=99.46% 00:34:06.260 cpu : usr=98.77%, sys=0.80%, ctx=21, majf=0, minf=26 00:34:06.260 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, 
>=64=0.0% 00:34:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.260 filename1: (groupid=0, jobs=1): err= 0: pid=2755173: Wed Nov 20 17:28:22 2024 00:34:06.260 read: IOPS=595, BW=2383KiB/s (2440kB/s)(23.3MiB/10017msec) 00:34:06.260 slat (nsec): min=6611, max=88410, avg=34735.17, stdev=18723.44 00:34:06.260 clat (usec): min=15767, max=40556, avg=26520.03, stdev=2008.04 00:34:06.260 lat (usec): min=15782, max=40582, avg=26554.77, stdev=2011.16 00:34:06.260 clat percentiles (usec): 00:34:06.260 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.260 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:06.260 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30016], 95.00th=[30278], 00:34:06.260 | 99.00th=[30540], 99.50th=[30802], 99.90th=[36439], 99.95th=[36439], 00:34:06.260 | 99.99th=[40633] 00:34:06.260 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2381.00, stdev=134.01, samples=20 00:34:06.260 iops : min= 512, max= 672, avg=595.25, stdev=33.50, samples=20 00:34:06.260 lat (msec) : 20=0.27%, 50=99.73% 00:34:06.260 cpu : usr=98.86%, sys=0.75%, ctx=52, majf=0, minf=23 00:34:06.260 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.260 filename2: (groupid=0, jobs=1): err= 0: pid=2755174: Wed Nov 20 17:28:22 2024 00:34:06.260 read: IOPS=595, BW=2383KiB/s (2440kB/s)(23.3MiB/10017msec) 00:34:06.260 slat (nsec): 
min=7492, max=89042, avg=36253.29, stdev=20351.64 00:34:06.260 clat (usec): min=15784, max=40603, avg=26490.19, stdev=2003.20 00:34:06.260 lat (usec): min=15800, max=40629, avg=26526.45, stdev=2006.95 00:34:06.260 clat percentiles (usec): 00:34:06.260 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.260 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.260 | 70.00th=[26870], 80.00th=[28181], 90.00th=[30016], 95.00th=[30278], 00:34:06.260 | 99.00th=[30540], 99.50th=[30802], 99.90th=[36439], 99.95th=[36439], 00:34:06.260 | 99.99th=[40633] 00:34:06.260 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2381.00, stdev=134.01, samples=20 00:34:06.260 iops : min= 512, max= 672, avg=595.25, stdev=33.50, samples=20 00:34:06.260 lat (msec) : 20=0.27%, 50=99.73% 00:34:06.260 cpu : usr=98.45%, sys=1.02%, ctx=127, majf=0, minf=31 00:34:06.260 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.260 filename2: (groupid=0, jobs=1): err= 0: pid=2755175: Wed Nov 20 17:28:22 2024 00:34:06.260 read: IOPS=595, BW=2383KiB/s (2440kB/s)(23.3MiB/10017msec) 00:34:06.260 slat (nsec): min=7092, max=96124, avg=37156.65, stdev=21997.31 00:34:06.260 clat (usec): min=17133, max=38123, avg=26547.12, stdev=2052.88 00:34:06.260 lat (usec): min=17148, max=38142, avg=26584.27, stdev=2053.99 00:34:06.260 clat percentiles (usec): 00:34:06.260 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.260 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:34:06.260 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30016], 95.00th=[30278], 00:34:06.260 | 99.00th=[30802], 
99.50th=[30802], 99.90th=[38011], 99.95th=[38011], 00:34:06.260 | 99.99th=[38011] 00:34:06.260 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2384.84, stdev=122.26, samples=19 00:34:06.260 iops : min= 544, max= 640, avg=596.21, stdev=30.56, samples=19 00:34:06.260 lat (msec) : 20=0.27%, 50=99.73% 00:34:06.260 cpu : usr=98.36%, sys=1.11%, ctx=97, majf=0, minf=29 00:34:06.260 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.260 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.261 filename2: (groupid=0, jobs=1): err= 0: pid=2755176: Wed Nov 20 17:28:22 2024 00:34:06.261 read: IOPS=597, BW=2390KiB/s (2447kB/s)(23.4MiB/10015msec) 00:34:06.261 slat (nsec): min=6757, max=91560, avg=37271.72, stdev=19463.49 00:34:06.261 clat (usec): min=8056, max=30910, avg=26497.49, stdev=2130.24 00:34:06.261 lat (usec): min=8066, max=30935, avg=26534.76, stdev=2131.59 00:34:06.261 clat percentiles (usec): 00:34:06.261 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:34:06.261 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:34:06.261 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30016], 95.00th=[30278], 00:34:06.261 | 99.00th=[30540], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:34:06.261 | 99.99th=[30802] 00:34:06.261 bw ( KiB/s): min= 2171, max= 2565, per=4.17%, avg=2387.20, stdev=127.28, samples=20 00:34:06.261 iops : min= 542, max= 641, avg=596.75, stdev=31.87, samples=20 00:34:06.261 lat (msec) : 10=0.03%, 20=0.53%, 50=99.43% 00:34:06.261 cpu : usr=98.55%, sys=1.07%, ctx=46, majf=0, minf=33 00:34:06.261 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.261 filename2: (groupid=0, jobs=1): err= 0: pid=2755177: Wed Nov 20 17:28:22 2024 00:34:06.261 read: IOPS=596, BW=2385KiB/s (2443kB/s)(23.3MiB/10008msec) 00:34:06.261 slat (nsec): min=7964, max=96090, avg=43175.30, stdev=21477.42 00:34:06.261 clat (usec): min=7510, max=47977, avg=26425.90, stdev=2478.13 00:34:06.261 lat (usec): min=7524, max=47998, avg=26469.08, stdev=2480.55 00:34:06.261 clat percentiles (usec): 00:34:06.261 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.261 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.261 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.261 | 99.00th=[30540], 99.50th=[30802], 99.90th=[47973], 99.95th=[47973], 00:34:06.261 | 99.99th=[47973] 00:34:06.261 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2378.32, stdev=136.92, samples=19 00:34:06.261 iops : min= 544, max= 640, avg=594.58, stdev=34.23, samples=19 00:34:06.261 lat (msec) : 10=0.27%, 20=0.27%, 50=99.46% 00:34:06.261 cpu : usr=98.24%, sys=1.14%, ctx=137, majf=0, minf=38 00:34:06.261 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.261 filename2: (groupid=0, jobs=1): err= 0: pid=2755178: Wed Nov 20 17:28:22 2024 00:34:06.261 read: IOPS=597, BW=2391KiB/s (2449kB/s)(23.4MiB/10009msec) 00:34:06.261 slat (nsec): min=6731, max=83472, avg=30509.31, stdev=17682.84 00:34:06.261 
clat (usec): min=10676, max=31000, avg=26508.12, stdev=2172.40 00:34:06.261 lat (usec): min=10684, max=31020, avg=26538.63, stdev=2173.89 00:34:06.261 clat percentiles (usec): 00:34:06.261 | 1.00th=[23200], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.261 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:34:06.261 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30016], 95.00th=[30278], 00:34:06.261 | 99.00th=[30540], 99.50th=[30802], 99.90th=[30802], 99.95th=[31065], 00:34:06.261 | 99.99th=[31065] 00:34:06.261 bw ( KiB/s): min= 2171, max= 2688, per=4.18%, avg=2391.32, stdev=154.54, samples=19 00:34:06.261 iops : min= 542, max= 672, avg=597.79, stdev=38.69, samples=19 00:34:06.261 lat (msec) : 20=0.80%, 50=99.20% 00:34:06.261 cpu : usr=98.58%, sys=1.04%, ctx=22, majf=0, minf=38 00:34:06.261 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.261 filename2: (groupid=0, jobs=1): err= 0: pid=2755179: Wed Nov 20 17:28:22 2024 00:34:06.261 read: IOPS=595, BW=2383KiB/s (2440kB/s)(23.3MiB/10017msec) 00:34:06.261 slat (nsec): min=6836, max=92813, avg=37075.03, stdev=20165.64 00:34:06.261 clat (usec): min=19425, max=37064, avg=26489.66, stdev=1997.21 00:34:06.261 lat (usec): min=19446, max=37085, avg=26526.73, stdev=2001.02 00:34:06.261 clat percentiles (usec): 00:34:06.261 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.261 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.261 | 70.00th=[26870], 80.00th=[28181], 90.00th=[30016], 95.00th=[30278], 00:34:06.261 | 99.00th=[30540], 99.50th=[30802], 99.90th=[36963], 99.95th=[36963], 00:34:06.261 | 
99.99th=[36963] 00:34:06.261 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2380.80, stdev=133.93, samples=20 00:34:06.261 iops : min= 512, max= 672, avg=595.20, stdev=33.48, samples=20 00:34:06.261 lat (msec) : 20=0.27%, 50=99.73% 00:34:06.261 cpu : usr=97.78%, sys=1.46%, ctx=161, majf=0, minf=27 00:34:06.261 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.261 filename2: (groupid=0, jobs=1): err= 0: pid=2755180: Wed Nov 20 17:28:22 2024 00:34:06.261 read: IOPS=596, BW=2386KiB/s (2443kB/s)(23.3MiB/10007msec) 00:34:06.261 slat (nsec): min=7099, max=96264, avg=42687.88, stdev=21642.13 00:34:06.261 clat (usec): min=7626, max=56157, avg=26416.62, stdev=2499.79 00:34:06.261 lat (usec): min=7679, max=56179, avg=26459.31, stdev=2502.40 00:34:06.261 clat percentiles (usec): 00:34:06.261 | 1.00th=[23462], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:34:06.261 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.261 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.261 | 99.00th=[30540], 99.50th=[30802], 99.90th=[47449], 99.95th=[47449], 00:34:06.261 | 99.99th=[56361] 00:34:06.261 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2378.11, stdev=130.23, samples=19 00:34:06.261 iops : min= 544, max= 640, avg=594.53, stdev=32.56, samples=19 00:34:06.261 lat (msec) : 10=0.27%, 20=0.30%, 50=99.40%, 100=0.03% 00:34:06.261 cpu : usr=98.12%, sys=1.24%, ctx=124, majf=0, minf=38 00:34:06.261 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.261 filename2: (groupid=0, jobs=1): err= 0: pid=2755181: Wed Nov 20 17:28:22 2024 00:34:06.261 read: IOPS=596, BW=2385KiB/s (2442kB/s)(23.3MiB/10009msec) 00:34:06.261 slat (usec): min=6, max=262, avg=48.36, stdev=17.51 00:34:06.261 clat (usec): min=8715, max=42730, avg=26382.38, stdev=2302.62 00:34:06.261 lat (usec): min=8728, max=42763, avg=26430.73, stdev=2305.63 00:34:06.261 clat percentiles (usec): 00:34:06.261 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:06.261 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:06.261 | 70.00th=[26870], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:34:06.261 | 99.00th=[30540], 99.50th=[30802], 99.90th=[42730], 99.95th=[42730], 00:34:06.261 | 99.99th=[42730] 00:34:06.261 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2378.11, stdev=137.04, samples=19 00:34:06.261 iops : min= 544, max= 640, avg=594.53, stdev=34.26, samples=19 00:34:06.261 lat (msec) : 10=0.27%, 20=0.27%, 50=99.46% 00:34:06.261 cpu : usr=98.57%, sys=0.97%, ctx=52, majf=0, minf=29 00:34:06.261 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:06.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:06.261 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:06.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:06.261 00:34:06.261 Run status group 0 (all jobs): 00:34:06.261 READ: bw=55.9MiB/s (58.6MB/s), 2383KiB/s-2392KiB/s (2440kB/s-2449kB/s), io=560MiB (587MB), run=10007-10018msec 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:06.261 17:28:23 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:06.261 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:06.262 
17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 bdev_null0 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 [2024-11-20 17:28:23.260347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 bdev_null1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:06.262 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:06.262 { 00:34:06.262 "params": { 00:34:06.262 "name": "Nvme$subsystem", 00:34:06.262 "trtype": "$TEST_TRANSPORT", 00:34:06.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.262 "adrfam": "ipv4", 00:34:06.263 "trsvcid": 
"$NVMF_PORT", 00:34:06.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.263 "hdgst": ${hdgst:-false}, 00:34:06.263 "ddgst": ${ddgst:-false} 00:34:06.263 }, 00:34:06.263 "method": "bdev_nvme_attach_controller" 00:34:06.263 } 00:34:06.263 EOF 00:34:06.263 )") 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:06.263 17:28:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:06.263 { 00:34:06.263 "params": { 00:34:06.263 "name": "Nvme$subsystem", 00:34:06.263 "trtype": "$TEST_TRANSPORT", 00:34:06.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:06.263 "adrfam": "ipv4", 00:34:06.263 "trsvcid": "$NVMF_PORT", 00:34:06.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:06.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:06.263 "hdgst": ${hdgst:-false}, 00:34:06.263 "ddgst": ${ddgst:-false} 00:34:06.263 }, 00:34:06.263 "method": "bdev_nvme_attach_controller" 00:34:06.263 } 00:34:06.263 EOF 00:34:06.263 )") 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:06.263 "params": { 00:34:06.263 "name": "Nvme0", 00:34:06.263 "trtype": "tcp", 00:34:06.263 "traddr": "10.0.0.2", 00:34:06.263 "adrfam": "ipv4", 00:34:06.263 "trsvcid": "4420", 00:34:06.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:06.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:06.263 "hdgst": false, 00:34:06.263 "ddgst": false 00:34:06.263 }, 00:34:06.263 "method": "bdev_nvme_attach_controller" 00:34:06.263 },{ 00:34:06.263 "params": { 00:34:06.263 "name": "Nvme1", 00:34:06.263 "trtype": "tcp", 00:34:06.263 "traddr": "10.0.0.2", 00:34:06.263 "adrfam": "ipv4", 00:34:06.263 "trsvcid": "4420", 00:34:06.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:06.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:06.263 "hdgst": false, 00:34:06.263 "ddgst": false 00:34:06.263 }, 00:34:06.263 "method": "bdev_nvme_attach_controller" 00:34:06.263 }' 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:06.263 17:28:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:06.263 17:28:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:06.263 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:06.263 ... 00:34:06.263 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:06.263 ... 00:34:06.263 fio-3.35 00:34:06.263 Starting 4 threads 00:34:11.537 00:34:11.537 filename0: (groupid=0, jobs=1): err= 0: pid=2757122: Wed Nov 20 17:28:29 2024 00:34:11.537 read: IOPS=2667, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:34:11.537 slat (nsec): min=6023, max=47276, avg=9559.16, stdev=4016.95 00:34:11.537 clat (usec): min=541, max=6140, avg=2971.64, stdev=501.22 00:34:11.537 lat (usec): min=548, max=6148, avg=2981.20, stdev=500.98 00:34:11.537 clat percentiles (usec): 00:34:11.537 | 1.00th=[ 1942], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2573], 00:34:11.537 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:34:11.537 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3589], 95.00th=[ 3851], 00:34:11.537 | 99.00th=[ 4621], 99.50th=[ 5014], 99.90th=[ 5473], 99.95th=[ 5604], 00:34:11.537 | 99.99th=[ 6128] 00:34:11.537 bw ( KiB/s): min=20272, max=22848, per=25.39%, avg=21334.33, stdev=881.36, samples=9 00:34:11.537 iops : min= 2534, max= 2856, avg=2666.78, stdev=110.18, samples=9 00:34:11.537 lat (usec) : 750=0.02% 00:34:11.537 lat (msec) : 2=1.45%, 4=95.17%, 10=3.36% 00:34:11.537 cpu : usr=96.56%, sys=3.10%, ctx=8, majf=0, minf=9 00:34:11.537 IO depths : 1=0.4%, 2=3.8%, 4=67.5%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.537 complete : 0=0.0%, 
4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.537 issued rwts: total=13338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.537 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:11.537 filename0: (groupid=0, jobs=1): err= 0: pid=2757123: Wed Nov 20 17:28:29 2024 00:34:11.537 read: IOPS=2556, BW=20.0MiB/s (20.9MB/s)(99.9MiB/5002msec) 00:34:11.537 slat (nsec): min=6024, max=42945, avg=9464.58, stdev=3716.96 00:34:11.537 clat (usec): min=561, max=5778, avg=3099.87, stdev=465.51 00:34:11.537 lat (usec): min=573, max=5784, avg=3109.33, stdev=465.04 00:34:11.537 clat percentiles (usec): 00:34:11.537 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2802], 00:34:11.537 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3130], 00:34:11.537 | 70.00th=[ 3228], 80.00th=[ 3425], 90.00th=[ 3687], 95.00th=[ 3949], 00:34:11.537 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5276], 99.95th=[ 5342], 00:34:11.537 | 99.99th=[ 5800] 00:34:11.537 bw ( KiB/s): min=19520, max=21616, per=24.51%, avg=20599.11, stdev=715.05, samples=9 00:34:11.537 iops : min= 2440, max= 2702, avg=2574.89, stdev=89.38, samples=9 00:34:11.537 lat (usec) : 750=0.02%, 1000=0.02% 00:34:11.537 lat (msec) : 2=0.52%, 4=94.98%, 10=4.47% 00:34:11.537 cpu : usr=96.44%, sys=3.24%, ctx=8, majf=0, minf=9 00:34:11.537 IO depths : 1=0.2%, 2=5.1%, 4=67.7%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.537 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.537 issued rwts: total=12789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.537 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:11.537 filename1: (groupid=0, jobs=1): err= 0: pid=2757124: Wed Nov 20 17:28:29 2024 00:34:11.537 read: IOPS=2473, BW=19.3MiB/s (20.3MB/s)(96.6MiB/5001msec) 00:34:11.537 slat (nsec): min=6040, max=43537, avg=9210.14, stdev=3687.37 00:34:11.537 clat (usec): min=734, 
max=5818, avg=3207.59, stdev=449.32 00:34:11.537 lat (usec): min=761, max=5825, avg=3216.80, stdev=448.84 00:34:11.537 clat percentiles (usec): 00:34:11.537 | 1.00th=[ 2311], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2900], 00:34:11.537 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3195], 00:34:11.537 | 70.00th=[ 3326], 80.00th=[ 3523], 90.00th=[ 3785], 95.00th=[ 4015], 00:34:11.537 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5473], 00:34:11.537 | 99.99th=[ 5800] 00:34:11.537 bw ( KiB/s): min=19104, max=20656, per=23.66%, avg=19879.11, stdev=629.37, samples=9 00:34:11.537 iops : min= 2388, max= 2582, avg=2484.89, stdev=78.67, samples=9 00:34:11.537 lat (usec) : 750=0.01% 00:34:11.537 lat (msec) : 2=0.30%, 4=94.40%, 10=5.30% 00:34:11.537 cpu : usr=95.96%, sys=3.46%, ctx=152, majf=0, minf=9 00:34:11.537 IO depths : 1=0.1%, 2=1.7%, 4=72.0%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.537 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.537 issued rwts: total=12369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.537 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:11.537 filename1: (groupid=0, jobs=1): err= 0: pid=2757125: Wed Nov 20 17:28:29 2024 00:34:11.537 read: IOPS=2809, BW=21.9MiB/s (23.0MB/s)(110MiB/5003msec) 00:34:11.537 slat (nsec): min=6021, max=69835, avg=10859.93, stdev=5054.94 00:34:11.537 clat (usec): min=463, max=5390, avg=2813.45, stdev=423.44 00:34:11.537 lat (usec): min=474, max=5404, avg=2824.31, stdev=423.54 00:34:11.537 clat percentiles (usec): 00:34:11.537 | 1.00th=[ 1827], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:34:11.537 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 2933], 00:34:11.537 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 3523], 00:34:11.537 | 99.00th=[ 3982], 99.50th=[ 4228], 99.90th=[ 4621], 99.95th=[ 4817], 00:34:11.537 
| 99.99th=[ 5276] 00:34:11.537 bw ( KiB/s): min=21616, max=23408, per=26.95%, avg=22650.67, stdev=590.86, samples=9 00:34:11.537 iops : min= 2702, max= 2926, avg=2831.33, stdev=73.86, samples=9 00:34:11.537 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:34:11.537 lat (msec) : 2=1.91%, 4=97.07%, 10=0.97% 00:34:11.537 cpu : usr=88.90%, sys=6.86%, ctx=645, majf=0, minf=9 00:34:11.537 IO depths : 1=0.5%, 2=9.8%, 4=60.6%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.537 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.537 issued rwts: total=14054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.537 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:11.537 00:34:11.537 Run status group 0 (all jobs): 00:34:11.537 READ: bw=82.1MiB/s (86.0MB/s), 19.3MiB/s-21.9MiB/s (20.3MB/s-23.0MB/s), io=411MiB (430MB), run=5001-5003msec 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:11.797 17:28:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.797 00:34:11.797 real 0m24.798s 00:34:11.797 user 4m52.354s 00:34:11.797 sys 0m5.206s 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.797 17:28:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 ************************************ 00:34:11.797 END TEST fio_dif_rand_params 00:34:11.797 ************************************ 00:34:11.797 17:28:29 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:11.797 17:28:29 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:11.797 17:28:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.797 17:28:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 ************************************ 00:34:12.057 START TEST fio_dif_digest 00:34:12.057 ************************************ 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 bdev_null0 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:12.057 [2024-11-20 17:28:29.884918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.057 17:28:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:12.058 { 00:34:12.058 "params": { 00:34:12.058 "name": "Nvme$subsystem", 00:34:12.058 "trtype": "$TEST_TRANSPORT", 00:34:12.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.058 "adrfam": "ipv4", 00:34:12.058 "trsvcid": "$NVMF_PORT", 00:34:12.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.058 "hdgst": ${hdgst:-false}, 00:34:12.058 "ddgst": ${ddgst:-false} 00:34:12.058 }, 00:34:12.058 "method": "bdev_nvme_attach_controller" 00:34:12.058 } 00:34:12.058 EOF 00:34:12.058 )") 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:12.058 "params": { 00:34:12.058 "name": "Nvme0", 00:34:12.058 "trtype": "tcp", 00:34:12.058 "traddr": "10.0.0.2", 00:34:12.058 "adrfam": "ipv4", 00:34:12.058 "trsvcid": "4420", 00:34:12.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:12.058 "hdgst": true, 00:34:12.058 "ddgst": true 00:34:12.058 }, 00:34:12.058 "method": "bdev_nvme_attach_controller" 00:34:12.058 }' 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:12.058 17:28:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.317 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:12.317 ... 
00:34:12.317 fio-3.35 00:34:12.317 Starting 3 threads 00:34:24.521 00:34:24.521 filename0: (groupid=0, jobs=1): err= 0: pid=2758196: Wed Nov 20 17:28:40 2024 00:34:24.521 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(370MiB/10044msec) 00:34:24.521 slat (usec): min=6, max=109, avg=16.07, stdev= 5.91 00:34:24.521 clat (usec): min=8067, max=52269, avg=10143.85, stdev=1201.91 00:34:24.521 lat (usec): min=8079, max=52289, avg=10159.91, stdev=1201.60 00:34:24.521 clat percentiles (usec): 00:34:24.521 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:34:24.521 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:34:24.521 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:34:24.521 | 99.00th=[11731], 99.50th=[11994], 99.90th=[13042], 99.95th=[44827], 00:34:24.521 | 99.99th=[52167] 00:34:24.521 bw ( KiB/s): min=36608, max=38656, per=35.67%, avg=37875.20, stdev=635.13, samples=20 00:34:24.521 iops : min= 286, max= 302, avg=295.90, stdev= 4.96, samples=20 00:34:24.521 lat (msec) : 10=42.62%, 20=57.31%, 50=0.03%, 100=0.03% 00:34:24.521 cpu : usr=95.52%, sys=4.17%, ctx=26, majf=0, minf=63 00:34:24.521 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.521 issued rwts: total=2961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.521 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:24.521 filename0: (groupid=0, jobs=1): err= 0: pid=2758197: Wed Nov 20 17:28:40 2024 00:34:24.521 read: IOPS=273, BW=34.1MiB/s (35.8MB/s)(343MiB/10044msec) 00:34:24.521 slat (nsec): min=6296, max=50518, avg=15919.12, stdev=7392.40 00:34:24.521 clat (usec): min=7313, max=47437, avg=10952.28, stdev=1202.56 00:34:24.522 lat (usec): min=7327, max=47460, avg=10968.20, stdev=1202.64 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 
1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:34:24.522 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:34:24.522 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:34:24.522 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13829], 99.95th=[45351], 00:34:24.522 | 99.99th=[47449] 00:34:24.522 bw ( KiB/s): min=34304, max=35840, per=33.04%, avg=35081.30, stdev=405.18, samples=20 00:34:24.522 iops : min= 268, max= 280, avg=274.05, stdev= 3.19, samples=20 00:34:24.522 lat (msec) : 10=10.17%, 20=89.76%, 50=0.07% 00:34:24.522 cpu : usr=95.77%, sys=3.93%, ctx=24, majf=0, minf=91 00:34:24.522 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.522 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:24.522 filename0: (groupid=0, jobs=1): err= 0: pid=2758198: Wed Nov 20 17:28:40 2024 00:34:24.522 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(329MiB/10045msec) 00:34:24.522 slat (nsec): min=6062, max=55182, avg=14953.36, stdev=7022.72 00:34:24.522 clat (usec): min=8470, max=51561, avg=11433.17, stdev=1266.54 00:34:24.522 lat (usec): min=8482, max=51571, avg=11448.12, stdev=1266.62 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:34:24.522 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:34:24.522 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:34:24.522 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14877], 99.95th=[45876], 00:34:24.522 | 99.99th=[51643] 00:34:24.522 bw ( KiB/s): min=32702, max=34816, per=31.66%, avg=33609.50, stdev=538.16, samples=20 00:34:24.522 iops : min= 255, max= 272, avg=262.55, stdev= 4.25, 
samples=20 00:34:24.522 lat (msec) : 10=2.51%, 20=97.41%, 50=0.04%, 100=0.04% 00:34:24.522 cpu : usr=95.66%, sys=4.03%, ctx=23, majf=0, minf=25 00:34:24.522 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 issued rwts: total=2628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.522 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:24.522 00:34:24.522 Run status group 0 (all jobs): 00:34:24.522 READ: bw=104MiB/s (109MB/s), 32.7MiB/s-36.8MiB/s (34.3MB/s-38.6MB/s), io=1042MiB (1092MB), run=10044-10045msec 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.522 00:34:24.522 
real 0m11.093s 00:34:24.522 user 0m35.365s 00:34:24.522 sys 0m1.557s 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:24.522 17:28:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:24.522 ************************************ 00:34:24.522 END TEST fio_dif_digest 00:34:24.522 ************************************ 00:34:24.522 17:28:40 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:24.522 17:28:40 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:24.522 17:28:40 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:24.522 17:28:40 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:24.522 17:28:40 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:24.522 17:28:40 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:24.522 17:28:40 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:24.522 17:28:40 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:24.522 rmmod nvme_tcp 00:34:24.522 rmmod nvme_fabrics 00:34:24.522 rmmod nvme_keyring 00:34:24.522 17:28:41 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:24.522 17:28:41 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:24.522 17:28:41 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:24.522 17:28:41 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2749771 ']' 00:34:24.522 17:28:41 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2749771 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2749771 ']' 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2749771 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2749771 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:24.522 
17:28:41 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2749771' 00:34:24.522 killing process with pid 2749771 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2749771 00:34:24.522 17:28:41 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2749771 00:34:24.522 17:28:41 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:24.522 17:28:41 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:25.902 Waiting for block devices as requested 00:34:26.161 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:26.161 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:26.161 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:26.419 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:26.419 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:26.419 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:26.679 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:26.679 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:26.679 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:26.679 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:26.938 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:26.938 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:26.938 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:27.197 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:27.197 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:27.197 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:27.456 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@791 
-- # grep -v SPDK_NVMF 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:27.456 17:28:45 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.456 17:28:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:27.456 17:28:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.993 17:28:47 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:29.993 00:34:29.993 real 1m14.659s 00:34:29.993 user 7m10.305s 00:34:29.993 sys 0m20.556s 00:34:29.993 17:28:47 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.993 17:28:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:29.993 ************************************ 00:34:29.993 END TEST nvmf_dif 00:34:29.993 ************************************ 00:34:29.993 17:28:47 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:29.993 17:28:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:29.993 17:28:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.993 17:28:47 -- common/autotest_common.sh@10 -- # set +x 00:34:29.993 ************************************ 00:34:29.993 START TEST nvmf_abort_qd_sizes 00:34:29.993 ************************************ 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:29.993 * Looking for test storage... 
00:34:29.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:29.993 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:29.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.994 --rc genhtml_branch_coverage=1 00:34:29.994 --rc genhtml_function_coverage=1 00:34:29.994 --rc genhtml_legend=1 00:34:29.994 --rc geninfo_all_blocks=1 00:34:29.994 --rc geninfo_unexecuted_blocks=1 00:34:29.994 00:34:29.994 ' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:29.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.994 --rc genhtml_branch_coverage=1 00:34:29.994 --rc genhtml_function_coverage=1 00:34:29.994 --rc genhtml_legend=1 00:34:29.994 --rc 
geninfo_all_blocks=1 00:34:29.994 --rc geninfo_unexecuted_blocks=1 00:34:29.994 00:34:29.994 ' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:29.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.994 --rc genhtml_branch_coverage=1 00:34:29.994 --rc genhtml_function_coverage=1 00:34:29.994 --rc genhtml_legend=1 00:34:29.994 --rc geninfo_all_blocks=1 00:34:29.994 --rc geninfo_unexecuted_blocks=1 00:34:29.994 00:34:29.994 ' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:29.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.994 --rc genhtml_branch_coverage=1 00:34:29.994 --rc genhtml_function_coverage=1 00:34:29.994 --rc genhtml_legend=1 00:34:29.994 --rc geninfo_all_blocks=1 00:34:29.994 --rc geninfo_unexecuted_blocks=1 00:34:29.994 00:34:29.994 ' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.994 17:28:47 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.994 17:28:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:29.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:29.994 17:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.284 17:28:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:35.284 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:35.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.284 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:35.285 Found net devices under 0000:86:00.0: cvl_0_0 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:35.285 Found net devices under 0000:86:00.1: cvl_0_1 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.285 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:34:35.544 00:34:35.544 --- 10.0.0.2 ping statistics --- 00:34:35.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.544 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:35.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:34:35.544 00:34:35.544 --- 10.0.0.1 ping statistics --- 00:34:35.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.544 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:35.544 17:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:38.871 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:38.871 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:39.880 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:40.138 17:28:58 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2766224 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2766224 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2766224 ']' 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.138 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:40.138 [2024-11-20 17:28:58.140317] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:34:40.138 [2024-11-20 17:28:58.140359] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.396 [2024-11-20 17:28:58.218695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:40.396 [2024-11-20 17:28:58.261964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.396 [2024-11-20 17:28:58.262000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.396 [2024-11-20 17:28:58.262007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.396 [2024-11-20 17:28:58.262013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.396 [2024-11-20 17:28:58.262020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:40.396 [2024-11-20 17:28:58.263616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.396 [2024-11-20 17:28:58.263712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:40.396 [2024-11-20 17:28:58.263835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.396 [2024-11-20 17:28:58.263836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.396 17:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:40.654 ************************************ 00:34:40.654 START TEST spdk_target_abort 00:34:40.654 ************************************ 00:34:40.654 17:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:40.654 17:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:40.654 17:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:40.654 17:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.654 17:28:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.934 spdk_targetn1 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.934 [2024-11-20 17:29:01.273020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.934 [2024-11-20 17:29:01.317331] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:43.934 17:29:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:47.214 Initializing NVMe Controllers 00:34:47.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:47.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:47.214 Initialization complete. Launching workers. 
00:34:47.214 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15588, failed: 0 00:34:47.214 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1375, failed to submit 14213 00:34:47.214 success 732, unsuccessful 643, failed 0 00:34:47.214 17:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:47.214 17:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:50.491 Initializing NVMe Controllers 00:34:50.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:50.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:50.491 Initialization complete. Launching workers. 00:34:50.491 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8673, failed: 0 00:34:50.491 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1274, failed to submit 7399 00:34:50.491 success 301, unsuccessful 973, failed 0 00:34:50.491 17:29:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:50.491 17:29:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:53.770 Initializing NVMe Controllers 00:34:53.770 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:53.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:53.770 Initialization complete. Launching workers. 
00:34:53.770 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38853, failed: 0 00:34:53.770 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2746, failed to submit 36107 00:34:53.770 success 590, unsuccessful 2156, failed 0 00:34:53.770 17:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:53.770 17:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.770 17:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:53.770 17:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.770 17:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:53.770 17:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.770 17:29:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2766224 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2766224 ']' 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2766224 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2766224 00:34:55.143 17:29:12 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2766224' 00:34:55.143 killing process with pid 2766224 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2766224 00:34:55.143 17:29:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2766224 00:34:55.143 00:34:55.143 real 0m14.716s 00:34:55.143 user 0m56.158s 00:34:55.143 sys 0m2.625s 00:34:55.143 17:29:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:55.143 17:29:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:55.143 ************************************ 00:34:55.143 END TEST spdk_target_abort 00:34:55.143 ************************************ 00:34:55.402 17:29:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:55.402 17:29:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:55.402 17:29:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:55.402 17:29:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:55.402 ************************************ 00:34:55.402 START TEST kernel_target_abort 00:34:55.402 ************************************ 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:55.402 17:29:13 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:55.402 17:29:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:57.939 Waiting for block devices as requested 00:34:58.198 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:58.198 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:58.198 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:58.457 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:58.457 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:58.457 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:58.716 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:58.716 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:58.716 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:58.716 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:58.975 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:58.975 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:58.975 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:59.233 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:59.233 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:59.233 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:59.233 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:59.493 17:29:17 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:59.493 No valid GPT data, bailing 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:59.493 00:34:59.493 Discovery Log Number of Records 2, Generation counter 2 00:34:59.493 =====Discovery Log Entry 0====== 00:34:59.493 trtype: tcp 00:34:59.493 adrfam: ipv4 00:34:59.493 subtype: current discovery subsystem 00:34:59.493 treq: not specified, sq flow control disable supported 00:34:59.493 portid: 1 00:34:59.493 trsvcid: 4420 00:34:59.493 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:59.493 traddr: 10.0.0.1 00:34:59.493 eflags: none 00:34:59.493 sectype: none 00:34:59.493 =====Discovery Log Entry 1====== 00:34:59.493 trtype: tcp 00:34:59.493 adrfam: ipv4 00:34:59.493 subtype: nvme subsystem 00:34:59.493 treq: not specified, sq flow control disable supported 00:34:59.493 portid: 1 00:34:59.493 trsvcid: 4420 00:34:59.493 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:59.493 traddr: 10.0.0.1 00:34:59.493 eflags: none 00:34:59.493 sectype: none 00:34:59.493 17:29:17 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.493 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:59.752 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.752 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:59.752 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:59.752 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:59.752 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:59.752 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:59.752 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:59.752 17:29:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:03.039 Initializing NVMe Controllers 00:35:03.039 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:03.039 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:03.039 Initialization complete. Launching workers. 
00:35:03.039 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94888, failed: 0 00:35:03.039 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94888, failed to submit 0 00:35:03.039 success 0, unsuccessful 94888, failed 0 00:35:03.039 17:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:03.039 17:29:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:06.327 Initializing NVMe Controllers 00:35:06.327 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:06.327 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:06.327 Initialization complete. Launching workers. 00:35:06.327 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151191, failed: 0 00:35:06.327 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38194, failed to submit 112997 00:35:06.327 success 0, unsuccessful 38194, failed 0 00:35:06.327 17:29:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:06.327 17:29:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:08.860 Initializing NVMe Controllers 00:35:08.860 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:08.860 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:08.860 Initialization complete. Launching workers. 
00:35:08.860 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141571, failed: 0 00:35:08.860 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35450, failed to submit 106121 00:35:08.860 success 0, unsuccessful 35450, failed 0 00:35:08.860 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:08.860 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:08.860 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:08.860 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:08.860 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:08.860 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:09.119 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:09.119 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:09.119 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:09.119 17:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:11.653 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:11.913 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:13.291 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:13.550 00:35:13.550 real 0m18.173s 00:35:13.550 user 0m9.130s 00:35:13.550 sys 0m5.135s 00:35:13.550 17:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.550 17:29:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:13.550 ************************************ 00:35:13.550 END TEST kernel_target_abort 00:35:13.550 ************************************ 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:13.550 rmmod nvme_tcp 00:35:13.550 rmmod nvme_fabrics 00:35:13.550 rmmod nvme_keyring 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2766224 ']' 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2766224 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2766224 ']' 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2766224 00:35:13.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2766224) - No such process 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2766224 is not found' 00:35:13.550 Process with pid 2766224 is not found 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:13.550 17:29:31 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:16.839 Waiting for block devices as requested 00:35:16.839 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:16.839 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:16.839 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:16.839 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:16.839 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:16.839 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:16.839 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:16.839 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:17.098 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:17.098 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:17.098 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:17.098 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:17.356 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:17.356 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:17.356 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:17.615 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:17.615 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.615 17:29:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.205 17:29:37 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:20.205 00:35:20.205 real 0m50.130s 00:35:20.205 user 1m9.687s 00:35:20.205 sys 0m16.457s 00:35:20.205 17:29:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:20.205 17:29:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:20.205 ************************************ 00:35:20.205 END TEST nvmf_abort_qd_sizes 00:35:20.205 ************************************ 00:35:20.205 17:29:37 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:20.205 17:29:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:20.205 17:29:37 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:20.205 17:29:37 -- common/autotest_common.sh@10 -- # set +x 00:35:20.205 ************************************ 00:35:20.205 START TEST keyring_file 00:35:20.205 ************************************ 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:20.205 * Looking for test storage... 00:35:20.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:20.205 17:29:37 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.205 --rc genhtml_branch_coverage=1 00:35:20.205 --rc genhtml_function_coverage=1 00:35:20.205 --rc genhtml_legend=1 00:35:20.205 --rc geninfo_all_blocks=1 00:35:20.205 --rc geninfo_unexecuted_blocks=1 00:35:20.205 00:35:20.205 ' 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.205 --rc genhtml_branch_coverage=1 00:35:20.205 --rc genhtml_function_coverage=1 00:35:20.205 --rc genhtml_legend=1 00:35:20.205 --rc geninfo_all_blocks=1 00:35:20.205 --rc 
geninfo_unexecuted_blocks=1 00:35:20.205 00:35:20.205 ' 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.205 --rc genhtml_branch_coverage=1 00:35:20.205 --rc genhtml_function_coverage=1 00:35:20.205 --rc genhtml_legend=1 00:35:20.205 --rc geninfo_all_blocks=1 00:35:20.205 --rc geninfo_unexecuted_blocks=1 00:35:20.205 00:35:20.205 ' 00:35:20.205 17:29:37 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.205 --rc genhtml_branch_coverage=1 00:35:20.205 --rc genhtml_function_coverage=1 00:35:20.205 --rc genhtml_legend=1 00:35:20.205 --rc geninfo_all_blocks=1 00:35:20.205 --rc geninfo_unexecuted_blocks=1 00:35:20.205 00:35:20.205 ' 00:35:20.205 17:29:37 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:20.205 17:29:37 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:20.205 17:29:37 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:20.205 17:29:37 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:20.205 17:29:37 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:20.206 17:29:37 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:20.206 17:29:37 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.206 17:29:37 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.206 17:29:37 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.206 17:29:37 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:20.206 17:29:37 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:20.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:20.206 17:29:37 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:20.206 17:29:37 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:20.206 17:29:37 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:20.206 17:29:37 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:20.206 17:29:37 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:20.206 17:29:37 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.abGGDx65zC 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.abGGDx65zC 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.abGGDx65zC 00:35:20.206 17:29:37 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.abGGDx65zC 00:35:20.206 17:29:37 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.F5uHB0lNrx 00:35:20.206 17:29:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:20.206 17:29:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:20.206 17:29:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.F5uHB0lNrx 00:35:20.206 17:29:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.F5uHB0lNrx 00:35:20.206 17:29:38 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.F5uHB0lNrx 
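The `format_interchange_psk`/`format_key` steps above build the TLS PSK strings written to `/tmp/tmp.abGGDx65zC` and `/tmp/tmp.F5uHB0lNrx`. A minimal standalone sketch of that step follows; the exact payload layout (raw key followed by its little-endian CRC-32, base64-encoded, between `NVMeTLSkey-1:<digest>:` and a trailing `:`) is my reading of the interchange format and should be verified against `test/nvmf/common.sh` before relying on it.

```shell
# Sketch only: rebuild a PSK interchange string like the harness does above.
# Assumes format NVMeTLSkey-1:<digest>:<base64(key + crc32_le)>: (digest 00 = no hash).
key=00112233445566778899aabbccddeeff
digest=00
psk=$(python3 - "$key" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
# append the key's CRC-32 (little-endian) before base64-encoding
print(base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode())
EOF
)
echo "NVMeTLSkey-1:${digest}:${psk}:"
```

In the run above this string is then written to a `mktemp` file and `chmod 0600` before being handed to `keyring_file_add_key`.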
00:35:20.206 17:29:38 keyring_file -- keyring/file.sh@30 -- # tgtpid=2775012 00:35:20.206 17:29:38 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:20.206 17:29:38 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2775012 00:35:20.206 17:29:38 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2775012 ']' 00:35:20.206 17:29:38 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.206 17:29:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.206 17:29:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:20.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:20.206 17:29:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.206 17:29:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:20.206 [2024-11-20 17:29:38.062772] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:35:20.206 [2024-11-20 17:29:38.062821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775012 ] 00:35:20.206 [2024-11-20 17:29:38.139251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.206 [2024-11-20 17:29:38.180930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:20.465 17:29:38 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:20.465 [2024-11-20 17:29:38.390611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.465 null0 00:35:20.465 [2024-11-20 17:29:38.422661] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:20.465 [2024-11-20 17:29:38.422995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.465 17:29:38 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
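The `NOT rpc_cmd nvmf_subsystem_add_listener ...` trace above is a negative test: the listener on 127.0.0.1:4420 already exists, so the RPC must fail with `-32602 Invalid parameters`, and the `NOT` wrapper turns that failure into a pass (`es=1`). A simplified sketch of the wrapper's core idea (the real helper in `autotest_common.sh` also validates the argument and inspects the exit status in more detail):

```shell
# Simplified sketch of the harness's NOT helper: invert the wrapped command's
# exit status so the test passes only when the command fails, as with the
# duplicate nvmf_subsystem_add_listener call above.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which is what the test expects
}
NOT false && echo "expected failure detected"
```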
00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:20.465 [2024-11-20 17:29:38.450727] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:20.465 request: 00:35:20.465 { 00:35:20.465 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.465 "secure_channel": false, 00:35:20.465 "listen_address": { 00:35:20.465 "trtype": "tcp", 00:35:20.465 "traddr": "127.0.0.1", 00:35:20.465 "trsvcid": "4420" 00:35:20.465 }, 00:35:20.465 "method": "nvmf_subsystem_add_listener", 00:35:20.465 "req_id": 1 00:35:20.465 } 00:35:20.465 Got JSON-RPC error response 00:35:20.465 response: 00:35:20.465 { 00:35:20.465 "code": -32602, 00:35:20.465 "message": "Invalid parameters" 00:35:20.465 } 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:20.465 17:29:38 keyring_file -- keyring/file.sh@47 -- # bperfpid=2775017 00:35:20.465 17:29:38 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:20.465 17:29:38 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2775017 /var/tmp/bperf.sock 00:35:20.465 17:29:38 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2775017 ']' 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:20.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.465 17:29:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:20.465 [2024-11-20 17:29:38.502194] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 00:35:20.465 [2024-11-20 17:29:38.502242] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775017 ] 00:35:20.723 [2024-11-20 17:29:38.575419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.723 [2024-11-20 17:29:38.617701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.723 17:29:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.723 17:29:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:20.723 17:29:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.abGGDx65zC 00:35:20.723 17:29:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.abGGDx65zC 00:35:20.982 17:29:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.F5uHB0lNrx 00:35:20.982 17:29:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.F5uHB0lNrx 00:35:21.240 17:29:39 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:21.240 17:29:39 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:21.240 17:29:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.240 17:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:21.241 17:29:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.241 17:29:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.abGGDx65zC == \/\t\m\p\/\t\m\p\.\a\b\G\G\D\x\6\5\z\C ]] 00:35:21.241 17:29:39 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:21.241 17:29:39 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:21.241 17:29:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.241 17:29:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.241 17:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:21.498 17:29:39 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.F5uHB0lNrx == \/\t\m\p\/\t\m\p\.\F\5\u\H\B\0\l\N\r\x ]] 00:35:21.498 17:29:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:21.498 17:29:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:21.498 17:29:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.498 17:29:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.498 17:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:21.498 17:29:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:21.755 17:29:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:21.755 17:29:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:21.755 17:29:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:21.755 17:29:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.755 17:29:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:21.755 17:29:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.755 17:29:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.012 17:29:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:22.012 17:29:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:22.012 17:29:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:22.012 [2024-11-20 17:29:40.033727] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:22.269 nvme0n1 00:35:22.269 17:29:40 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:22.269 17:29:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:22.269 17:29:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:22.269 17:29:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:22.269 17:29:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:22.269 17:29:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:22.528 17:29:40 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:22.528 17:29:40 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:22.528 17:29:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:22.528 17:29:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:22.528 17:29:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:22.528 17:29:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:22.528 17:29:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.528 17:29:40 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:22.528 17:29:40 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:22.786 Running I/O for 1 seconds... 00:35:23.724 19190.00 IOPS, 74.96 MiB/s 00:35:23.724 Latency(us) 00:35:23.724 [2024-11-20T16:29:41.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.724 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:23.724 nvme0n1 : 1.00 19239.79 75.16 0.00 0.00 6641.34 2481.01 17975.59 00:35:23.724 [2024-11-20T16:29:41.767Z] =================================================================================================================== 00:35:23.724 [2024-11-20T16:29:41.767Z] Total : 19239.79 75.16 0.00 0.00 6641.34 2481.01 17975.59 00:35:23.724 { 00:35:23.724 "results": [ 00:35:23.724 { 00:35:23.724 "job": "nvme0n1", 00:35:23.724 "core_mask": "0x2", 00:35:23.724 "workload": "randrw", 00:35:23.724 "percentage": 50, 00:35:23.724 "status": "finished", 00:35:23.724 "queue_depth": 128, 00:35:23.724 "io_size": 4096, 00:35:23.724 "runtime": 1.004117, 00:35:23.724 "iops": 19239.78978545329, 00:35:23.724 "mibps": 75.15542884942691, 
00:35:23.724 "io_failed": 0, 00:35:23.724 "io_timeout": 0, 00:35:23.724 "avg_latency_us": 6641.337606452074, 00:35:23.724 "min_latency_us": 2481.0057142857145, 00:35:23.724 "max_latency_us": 17975.588571428572 00:35:23.724 } 00:35:23.724 ], 00:35:23.724 "core_count": 1 00:35:23.724 } 00:35:23.724 17:29:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:23.724 17:29:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:23.981 17:29:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:23.981 17:29:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:23.981 17:29:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:23.981 17:29:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:23.981 17:29:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:23.981 17:29:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:24.239 17:29:42 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:24.239 17:29:42 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:24.239 17:29:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:24.239 17:29:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:24.239 17:29:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:24.239 17:29:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:24.239 17:29:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:24.239 17:29:42 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:24.239 17:29:42 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:24.240 17:29:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:24.240 17:29:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:24.240 17:29:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:24.240 17:29:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:24.240 17:29:42 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:24.240 17:29:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:24.240 17:29:42 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:24.240 17:29:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:24.498 [2024-11-20 17:29:42.415373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:24.498 [2024-11-20 17:29:42.416081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x894e70 (107): Transport endpoint is not connected 00:35:24.498 [2024-11-20 17:29:42.417076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x894e70 (9): Bad file descriptor 00:35:24.498 [2024-11-20 17:29:42.418077] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:24.498 [2024-11-20 17:29:42.418086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:24.498 [2024-11-20 17:29:42.418094] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:24.498 [2024-11-20 17:29:42.418102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:24.498 request: 00:35:24.498 { 00:35:24.498 "name": "nvme0", 00:35:24.498 "trtype": "tcp", 00:35:24.498 "traddr": "127.0.0.1", 00:35:24.498 "adrfam": "ipv4", 00:35:24.498 "trsvcid": "4420", 00:35:24.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.498 "prchk_reftag": false, 00:35:24.498 "prchk_guard": false, 00:35:24.498 "hdgst": false, 00:35:24.498 "ddgst": false, 00:35:24.498 "psk": "key1", 00:35:24.498 "allow_unrecognized_csi": false, 00:35:24.498 "method": "bdev_nvme_attach_controller", 00:35:24.498 "req_id": 1 00:35:24.498 } 00:35:24.498 Got JSON-RPC error response 00:35:24.498 response: 00:35:24.498 { 00:35:24.498 "code": -5, 00:35:24.498 "message": "Input/output error" 00:35:24.498 } 00:35:24.498 17:29:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:24.498 17:29:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:24.498 17:29:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:24.498 17:29:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:24.498 17:29:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:24.498 17:29:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:24.498 17:29:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:24.498 17:29:42 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:35:24.498 17:29:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:24.498 17:29:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:24.757 17:29:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:24.757 17:29:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:24.757 17:29:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:24.757 17:29:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:24.757 17:29:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:24.757 17:29:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:24.757 17:29:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.016 17:29:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:25.016 17:29:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:25.016 17:29:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:25.016 17:29:43 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:25.016 17:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:25.275 17:29:43 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:25.275 17:29:43 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:25.275 17:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.534 17:29:43 keyring_file -- keyring/file.sh@78 -- 
# (( 0 == 0 )) 00:35:25.534 17:29:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.abGGDx65zC 00:35:25.534 17:29:43 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.abGGDx65zC 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.abGGDx65zC 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.abGGDx65zC 00:35:25.534 17:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.abGGDx65zC 00:35:25.534 [2024-11-20 17:29:43.547049] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.abGGDx65zC': 0100660 00:35:25.534 [2024-11-20 17:29:43.547074] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:25.534 request: 00:35:25.534 { 00:35:25.534 "name": "key0", 00:35:25.534 "path": "/tmp/tmp.abGGDx65zC", 00:35:25.534 "method": "keyring_file_add_key", 00:35:25.534 "req_id": 1 00:35:25.534 } 00:35:25.534 Got JSON-RPC error response 00:35:25.534 response: 00:35:25.534 { 00:35:25.534 "code": -1, 00:35:25.534 "message": "Operation not permitted" 00:35:25.534 } 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:25.534 
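The failure above (`Invalid permissions for key file '/tmp/tmp.abGGDx65zC': 0100660`, surfaced as `-1 Operation not permitted`) is the keyring rejecting a key file readable by group or others; the test then restores mode 0600 and the add succeeds. The mode check itself can be demonstrated in isolation (illustrative only, no SPDK target involved; `stat -c %a` is GNU coreutils, as on the CI host):

```shell
# Demonstrate the permission states the test toggles between: 0660 is
# rejected by keyring_file_add_key, 0600 is accepted.
keyfile=$(mktemp)
chmod 0660 "$keyfile"
echo "before: $(stat -c %a "$keyfile")"   # group-readable -> add would fail
chmod 0600 "$keyfile"
echo "after:  $(stat -c %a "$keyfile")"   # owner-only -> add succeeds
rm -f "$keyfile"
```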
17:29:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:25.534 17:29:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:25.534 17:29:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.abGGDx65zC 00:35:25.534 17:29:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.abGGDx65zC 00:35:25.534 17:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.abGGDx65zC 00:35:25.792 17:29:43 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.abGGDx65zC 00:35:25.792 17:29:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:25.792 17:29:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:25.792 17:29:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:25.792 17:29:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:25.792 17:29:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:25.792 17:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:26.051 17:29:43 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:26.051 17:29:43 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:26.051 17:29:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:26.051 17:29:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:26.051 17:29:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:26.051 17:29:43 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.051 17:29:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:26.051 17:29:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.051 17:29:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:26.051 17:29:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:26.310 [2024-11-20 17:29:44.144631] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.abGGDx65zC': No such file or directory 00:35:26.310 [2024-11-20 17:29:44.144652] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:26.310 [2024-11-20 17:29:44.144666] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:26.310 [2024-11-20 17:29:44.144673] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:26.310 [2024-11-20 17:29:44.144696] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:26.310 [2024-11-20 17:29:44.144702] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:26.310 request: 00:35:26.310 { 00:35:26.310 "name": "nvme0", 00:35:26.310 "trtype": "tcp", 00:35:26.310 "traddr": "127.0.0.1", 00:35:26.310 "adrfam": "ipv4", 00:35:26.310 "trsvcid": "4420", 00:35:26.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:26.310 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:26.310 "prchk_reftag": false, 00:35:26.310 "prchk_guard": false, 00:35:26.310 "hdgst": false, 00:35:26.310 "ddgst": false, 00:35:26.310 "psk": "key0", 00:35:26.310 "allow_unrecognized_csi": false, 00:35:26.310 "method": "bdev_nvme_attach_controller", 00:35:26.310 "req_id": 1 00:35:26.310 } 00:35:26.310 Got JSON-RPC error response 00:35:26.310 response: 00:35:26.310 { 00:35:26.310 "code": -19, 00:35:26.310 "message": "No such device" 00:35:26.310 } 00:35:26.310 17:29:44 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:26.310 17:29:44 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:26.310 17:29:44 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:26.310 17:29:44 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:26.310 17:29:44 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:26.310 17:29:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:26.568 17:29:44 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uSKGWuuT01 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:26.569 17:29:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:26.569 17:29:44 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:26.569 17:29:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:26.569 17:29:44 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:26.569 17:29:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:26.569 17:29:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uSKGWuuT01 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uSKGWuuT01 00:35:26.569 17:29:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.uSKGWuuT01 00:35:26.569 17:29:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uSKGWuuT01 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uSKGWuuT01 00:35:26.569 17:29:44 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:26.569 17:29:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:26.827 nvme0n1 00:35:27.086 17:29:44 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:27.086 17:29:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:27.086 17:29:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:27.086 17:29:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:27.086 17:29:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:27.086 17:29:44 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:27.086 17:29:45 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:27.086 17:29:45 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:27.086 17:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:27.344 17:29:45 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:27.344 17:29:45 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:27.344 17:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:27.344 17:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:27.344 17:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:27.603 17:29:45 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:27.603 17:29:45 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:27.603 17:29:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:27.603 17:29:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:27.603 17:29:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:27.603 17:29:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:27.603 17:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:27.861 17:29:45 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:27.861 17:29:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:27.861 17:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:27.861 17:29:45 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:27.861 17:29:45 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:27.861 17:29:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:28.121 17:29:46 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:28.121 17:29:46 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uSKGWuuT01 00:35:28.121 17:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uSKGWuuT01 00:35:28.381 17:29:46 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.F5uHB0lNrx 00:35:28.381 17:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.F5uHB0lNrx 00:35:28.640 17:29:46 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:28.640 17:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:28.640 nvme0n1 00:35:28.899 17:29:46 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:28.899 17:29:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:29.158 17:29:46 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:29.158 "subsystems": [ 00:35:29.158 { 00:35:29.158 "subsystem": 
"keyring", 00:35:29.158 "config": [ 00:35:29.158 { 00:35:29.158 "method": "keyring_file_add_key", 00:35:29.158 "params": { 00:35:29.158 "name": "key0", 00:35:29.158 "path": "/tmp/tmp.uSKGWuuT01" 00:35:29.158 } 00:35:29.158 }, 00:35:29.158 { 00:35:29.158 "method": "keyring_file_add_key", 00:35:29.158 "params": { 00:35:29.158 "name": "key1", 00:35:29.158 "path": "/tmp/tmp.F5uHB0lNrx" 00:35:29.158 } 00:35:29.158 } 00:35:29.158 ] 00:35:29.158 }, 00:35:29.158 { 00:35:29.158 "subsystem": "iobuf", 00:35:29.158 "config": [ 00:35:29.158 { 00:35:29.158 "method": "iobuf_set_options", 00:35:29.158 "params": { 00:35:29.158 "small_pool_count": 8192, 00:35:29.158 "large_pool_count": 1024, 00:35:29.158 "small_bufsize": 8192, 00:35:29.158 "large_bufsize": 135168, 00:35:29.158 "enable_numa": false 00:35:29.158 } 00:35:29.158 } 00:35:29.158 ] 00:35:29.158 }, 00:35:29.158 { 00:35:29.158 "subsystem": "sock", 00:35:29.158 "config": [ 00:35:29.158 { 00:35:29.158 "method": "sock_set_default_impl", 00:35:29.158 "params": { 00:35:29.158 "impl_name": "posix" 00:35:29.158 } 00:35:29.158 }, 00:35:29.158 { 00:35:29.158 "method": "sock_impl_set_options", 00:35:29.158 "params": { 00:35:29.158 "impl_name": "ssl", 00:35:29.159 "recv_buf_size": 4096, 00:35:29.159 "send_buf_size": 4096, 00:35:29.159 "enable_recv_pipe": true, 00:35:29.159 "enable_quickack": false, 00:35:29.159 "enable_placement_id": 0, 00:35:29.159 "enable_zerocopy_send_server": true, 00:35:29.159 "enable_zerocopy_send_client": false, 00:35:29.159 "zerocopy_threshold": 0, 00:35:29.159 "tls_version": 0, 00:35:29.159 "enable_ktls": false 00:35:29.159 } 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "method": "sock_impl_set_options", 00:35:29.159 "params": { 00:35:29.159 "impl_name": "posix", 00:35:29.159 "recv_buf_size": 2097152, 00:35:29.159 "send_buf_size": 2097152, 00:35:29.159 "enable_recv_pipe": true, 00:35:29.159 "enable_quickack": false, 00:35:29.159 "enable_placement_id": 0, 00:35:29.159 "enable_zerocopy_send_server": true, 
00:35:29.159 "enable_zerocopy_send_client": false, 00:35:29.159 "zerocopy_threshold": 0, 00:35:29.159 "tls_version": 0, 00:35:29.159 "enable_ktls": false 00:35:29.159 } 00:35:29.159 } 00:35:29.159 ] 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "subsystem": "vmd", 00:35:29.159 "config": [] 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "subsystem": "accel", 00:35:29.159 "config": [ 00:35:29.159 { 00:35:29.159 "method": "accel_set_options", 00:35:29.159 "params": { 00:35:29.159 "small_cache_size": 128, 00:35:29.159 "large_cache_size": 16, 00:35:29.159 "task_count": 2048, 00:35:29.159 "sequence_count": 2048, 00:35:29.159 "buf_count": 2048 00:35:29.159 } 00:35:29.159 } 00:35:29.159 ] 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "subsystem": "bdev", 00:35:29.159 "config": [ 00:35:29.159 { 00:35:29.159 "method": "bdev_set_options", 00:35:29.159 "params": { 00:35:29.159 "bdev_io_pool_size": 65535, 00:35:29.159 "bdev_io_cache_size": 256, 00:35:29.159 "bdev_auto_examine": true, 00:35:29.159 "iobuf_small_cache_size": 128, 00:35:29.159 "iobuf_large_cache_size": 16 00:35:29.159 } 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "method": "bdev_raid_set_options", 00:35:29.159 "params": { 00:35:29.159 "process_window_size_kb": 1024, 00:35:29.159 "process_max_bandwidth_mb_sec": 0 00:35:29.159 } 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "method": "bdev_iscsi_set_options", 00:35:29.159 "params": { 00:35:29.159 "timeout_sec": 30 00:35:29.159 } 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "method": "bdev_nvme_set_options", 00:35:29.159 "params": { 00:35:29.159 "action_on_timeout": "none", 00:35:29.159 "timeout_us": 0, 00:35:29.159 "timeout_admin_us": 0, 00:35:29.159 "keep_alive_timeout_ms": 10000, 00:35:29.159 "arbitration_burst": 0, 00:35:29.159 "low_priority_weight": 0, 00:35:29.159 "medium_priority_weight": 0, 00:35:29.159 "high_priority_weight": 0, 00:35:29.159 "nvme_adminq_poll_period_us": 10000, 00:35:29.159 "nvme_ioq_poll_period_us": 0, 00:35:29.159 "io_queue_requests": 512, 
00:35:29.159 "delay_cmd_submit": true, 00:35:29.159 "transport_retry_count": 4, 00:35:29.159 "bdev_retry_count": 3, 00:35:29.159 "transport_ack_timeout": 0, 00:35:29.159 "ctrlr_loss_timeout_sec": 0, 00:35:29.159 "reconnect_delay_sec": 0, 00:35:29.159 "fast_io_fail_timeout_sec": 0, 00:35:29.159 "disable_auto_failback": false, 00:35:29.159 "generate_uuids": false, 00:35:29.159 "transport_tos": 0, 00:35:29.159 "nvme_error_stat": false, 00:35:29.159 "rdma_srq_size": 0, 00:35:29.159 "io_path_stat": false, 00:35:29.159 "allow_accel_sequence": false, 00:35:29.159 "rdma_max_cq_size": 0, 00:35:29.159 "rdma_cm_event_timeout_ms": 0, 00:35:29.159 "dhchap_digests": [ 00:35:29.159 "sha256", 00:35:29.159 "sha384", 00:35:29.159 "sha512" 00:35:29.159 ], 00:35:29.159 "dhchap_dhgroups": [ 00:35:29.159 "null", 00:35:29.159 "ffdhe2048", 00:35:29.159 "ffdhe3072", 00:35:29.159 "ffdhe4096", 00:35:29.159 "ffdhe6144", 00:35:29.159 "ffdhe8192" 00:35:29.159 ] 00:35:29.159 } 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "method": "bdev_nvme_attach_controller", 00:35:29.159 "params": { 00:35:29.159 "name": "nvme0", 00:35:29.159 "trtype": "TCP", 00:35:29.159 "adrfam": "IPv4", 00:35:29.159 "traddr": "127.0.0.1", 00:35:29.159 "trsvcid": "4420", 00:35:29.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.159 "prchk_reftag": false, 00:35:29.159 "prchk_guard": false, 00:35:29.159 "ctrlr_loss_timeout_sec": 0, 00:35:29.159 "reconnect_delay_sec": 0, 00:35:29.159 "fast_io_fail_timeout_sec": 0, 00:35:29.159 "psk": "key0", 00:35:29.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.159 "hdgst": false, 00:35:29.159 "ddgst": false, 00:35:29.159 "multipath": "multipath" 00:35:29.159 } 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "method": "bdev_nvme_set_hotplug", 00:35:29.159 "params": { 00:35:29.159 "period_us": 100000, 00:35:29.159 "enable": false 00:35:29.159 } 00:35:29.159 }, 00:35:29.159 { 00:35:29.159 "method": "bdev_wait_for_examine" 00:35:29.159 } 00:35:29.159 ] 00:35:29.159 }, 00:35:29.159 { 
00:35:29.160 "subsystem": "nbd", 00:35:29.160 "config": [] 00:35:29.160 } 00:35:29.160 ] 00:35:29.160 }' 00:35:29.160 17:29:46 keyring_file -- keyring/file.sh@115 -- # killprocess 2775017 00:35:29.160 17:29:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2775017 ']' 00:35:29.160 17:29:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2775017 00:35:29.160 17:29:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:29.160 17:29:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.160 17:29:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2775017 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2775017' 00:35:29.160 killing process with pid 2775017 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@973 -- # kill 2775017 00:35:29.160 Received shutdown signal, test time was about 1.000000 seconds 00:35:29.160 00:35:29.160 Latency(us) 00:35:29.160 [2024-11-20T16:29:47.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.160 [2024-11-20T16:29:47.203Z] =================================================================================================================== 00:35:29.160 [2024-11-20T16:29:47.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@978 -- # wait 2775017 00:35:29.160 17:29:47 keyring_file -- keyring/file.sh@118 -- # bperfpid=2776536 00:35:29.160 17:29:47 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2776536 /var/tmp/bperf.sock 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2776536 ']' 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:29.160 17:29:47 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.160 17:29:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:29.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:29.160 17:29:47 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:29.160 "subsystems": [ 00:35:29.160 { 00:35:29.160 "subsystem": "keyring", 00:35:29.160 "config": [ 00:35:29.160 { 00:35:29.160 "method": "keyring_file_add_key", 00:35:29.160 "params": { 00:35:29.160 "name": "key0", 00:35:29.160 "path": "/tmp/tmp.uSKGWuuT01" 00:35:29.160 } 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "method": "keyring_file_add_key", 00:35:29.160 "params": { 00:35:29.160 "name": "key1", 00:35:29.160 "path": "/tmp/tmp.F5uHB0lNrx" 00:35:29.160 } 00:35:29.160 } 00:35:29.160 ] 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "subsystem": "iobuf", 00:35:29.160 "config": [ 00:35:29.160 { 00:35:29.160 "method": "iobuf_set_options", 00:35:29.160 "params": { 00:35:29.160 "small_pool_count": 8192, 00:35:29.160 "large_pool_count": 1024, 00:35:29.160 "small_bufsize": 8192, 00:35:29.160 "large_bufsize": 135168, 00:35:29.160 "enable_numa": false 00:35:29.160 } 00:35:29.160 } 00:35:29.160 ] 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "subsystem": "sock", 00:35:29.160 "config": [ 00:35:29.160 { 00:35:29.160 "method": "sock_set_default_impl", 00:35:29.160 "params": { 00:35:29.160 "impl_name": "posix" 00:35:29.160 } 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "method": "sock_impl_set_options", 00:35:29.160 "params": { 00:35:29.160 "impl_name": "ssl", 00:35:29.160 "recv_buf_size": 4096, 00:35:29.160 
"send_buf_size": 4096, 00:35:29.160 "enable_recv_pipe": true, 00:35:29.160 "enable_quickack": false, 00:35:29.160 "enable_placement_id": 0, 00:35:29.160 "enable_zerocopy_send_server": true, 00:35:29.160 "enable_zerocopy_send_client": false, 00:35:29.160 "zerocopy_threshold": 0, 00:35:29.160 "tls_version": 0, 00:35:29.160 "enable_ktls": false 00:35:29.160 } 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "method": "sock_impl_set_options", 00:35:29.160 "params": { 00:35:29.160 "impl_name": "posix", 00:35:29.160 "recv_buf_size": 2097152, 00:35:29.160 "send_buf_size": 2097152, 00:35:29.160 "enable_recv_pipe": true, 00:35:29.160 "enable_quickack": false, 00:35:29.160 "enable_placement_id": 0, 00:35:29.160 "enable_zerocopy_send_server": true, 00:35:29.160 "enable_zerocopy_send_client": false, 00:35:29.160 "zerocopy_threshold": 0, 00:35:29.160 "tls_version": 0, 00:35:29.160 "enable_ktls": false 00:35:29.160 } 00:35:29.160 } 00:35:29.160 ] 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "subsystem": "vmd", 00:35:29.160 "config": [] 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "subsystem": "accel", 00:35:29.160 "config": [ 00:35:29.160 { 00:35:29.160 "method": "accel_set_options", 00:35:29.160 "params": { 00:35:29.160 "small_cache_size": 128, 00:35:29.160 "large_cache_size": 16, 00:35:29.160 "task_count": 2048, 00:35:29.160 "sequence_count": 2048, 00:35:29.160 "buf_count": 2048 00:35:29.160 } 00:35:29.160 } 00:35:29.160 ] 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "subsystem": "bdev", 00:35:29.160 "config": [ 00:35:29.160 { 00:35:29.160 "method": "bdev_set_options", 00:35:29.160 "params": { 00:35:29.160 "bdev_io_pool_size": 65535, 00:35:29.160 "bdev_io_cache_size": 256, 00:35:29.160 "bdev_auto_examine": true, 00:35:29.160 "iobuf_small_cache_size": 128, 00:35:29.160 "iobuf_large_cache_size": 16 00:35:29.160 } 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "method": "bdev_raid_set_options", 00:35:29.160 "params": { 00:35:29.160 "process_window_size_kb": 1024, 00:35:29.160 
"process_max_bandwidth_mb_sec": 0 00:35:29.160 } 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "method": "bdev_iscsi_set_options", 00:35:29.160 "params": { 00:35:29.160 "timeout_sec": 30 00:35:29.160 } 00:35:29.160 }, 00:35:29.160 { 00:35:29.160 "method": "bdev_nvme_set_options", 00:35:29.160 "params": { 00:35:29.160 "action_on_timeout": "none", 00:35:29.160 "timeout_us": 0, 00:35:29.160 "timeout_admin_us": 0, 00:35:29.160 "keep_alive_timeout_ms": 10000, 00:35:29.160 "arbitration_burst": 0, 00:35:29.160 "low_priority_weight": 0, 00:35:29.160 "medium_priority_weight": 0, 00:35:29.160 "high_priority_weight": 0, 00:35:29.160 "nvme_adminq_poll_period_us": 10000, 00:35:29.160 "nvme_ioq_poll_period_us": 0, 00:35:29.160 "io_queue_requests": 512, 00:35:29.160 "delay_cmd_submit": true, 00:35:29.160 "transport_retry_count": 4, 00:35:29.160 "bdev_retry_count": 3, 00:35:29.160 "transport_ack_timeout": 0, 00:35:29.160 "ctrlr_loss_timeout_sec": 0, 00:35:29.160 "reconnect_delay_sec": 0, 00:35:29.160 "fast_io_fail_timeout_sec": 0, 00:35:29.161 "disable_auto_failback": false, 00:35:29.161 "generate_uuids": false, 00:35:29.161 "transport_tos": 0, 00:35:29.161 "nvme_error_stat": false, 00:35:29.161 "rdma_srq_size": 0, 00:35:29.161 "io_path_stat": false, 00:35:29.161 "allow_accel_sequence": false, 00:35:29.161 "rdma_max_cq_size": 0, 00:35:29.161 "rdma_cm_event_timeout_ms": 0, 00:35:29.161 "dhchap_digests": [ 00:35:29.161 "sha256", 00:35:29.161 "sha384", 00:35:29.161 "sha512" 00:35:29.161 ], 00:35:29.161 "dhchap_dhgroups": [ 00:35:29.161 "null", 00:35:29.161 "ffdhe2048", 00:35:29.161 "ffdhe3072", 00:35:29.161 "ffdhe4096", 00:35:29.161 "ffdhe6144", 00:35:29.161 "ffdhe8192" 00:35:29.161 ] 00:35:29.161 } 00:35:29.161 }, 00:35:29.161 { 00:35:29.161 "method": "bdev_nvme_attach_controller", 00:35:29.161 "params": { 00:35:29.161 "name": "nvme0", 00:35:29.161 "trtype": "TCP", 00:35:29.161 "adrfam": "IPv4", 00:35:29.161 "traddr": "127.0.0.1", 00:35:29.161 "trsvcid": "4420", 00:35:29.161 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:29.161 "prchk_reftag": false, 00:35:29.161 "prchk_guard": false, 00:35:29.161 "ctrlr_loss_timeout_sec": 0, 00:35:29.161 "reconnect_delay_sec": 0, 00:35:29.161 "fast_io_fail_timeout_sec": 0, 00:35:29.161 "psk": "key0", 00:35:29.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.161 "hdgst": false, 00:35:29.161 "ddgst": false, 00:35:29.161 "multipath": "multipath" 00:35:29.161 } 00:35:29.161 }, 00:35:29.161 { 00:35:29.161 "method": "bdev_nvme_set_hotplug", 00:35:29.161 "params": { 00:35:29.161 "period_us": 100000, 00:35:29.161 "enable": false 00:35:29.161 } 00:35:29.161 }, 00:35:29.161 { 00:35:29.161 "method": "bdev_wait_for_examine" 00:35:29.161 } 00:35:29.161 ] 00:35:29.161 }, 00:35:29.161 { 00:35:29.161 "subsystem": "nbd", 00:35:29.161 "config": [] 00:35:29.161 } 00:35:29.161 ] 00:35:29.161 }' 00:35:29.161 17:29:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.161 17:29:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:29.420 [2024-11-20 17:29:47.216286] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:35:29.420 [2024-11-20 17:29:47.216331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776536 ] 00:35:29.420 [2024-11-20 17:29:47.290481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.420 [2024-11-20 17:29:47.331989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.678 [2024-11-20 17:29:47.493092] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:30.245 17:29:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.245 17:29:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:30.245 17:29:48 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:30.245 17:29:48 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:30.245 17:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:30.245 17:29:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:30.245 17:29:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:30.245 17:29:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:30.245 17:29:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:30.245 17:29:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:30.245 17:29:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:30.245 17:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:30.526 17:29:48 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:30.526 17:29:48 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:30.526 17:29:48 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:30.526 17:29:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:30.526 17:29:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:30.526 17:29:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:30.526 17:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:30.822 17:29:48 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:30.822 17:29:48 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:30.822 17:29:48 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:30.822 17:29:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:30.822 17:29:48 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:30.822 17:29:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:30.822 17:29:48 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.uSKGWuuT01 /tmp/tmp.F5uHB0lNrx 00:35:30.822 17:29:48 keyring_file -- keyring/file.sh@20 -- # killprocess 2776536 00:35:30.822 17:29:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2776536 ']' 00:35:30.822 17:29:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2776536 00:35:30.822 17:29:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:30.822 17:29:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.822 17:29:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2776536 00:35:31.128 17:29:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:31.128 17:29:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:31.128 17:29:48 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2776536' 00:35:31.129 killing process with pid 2776536 00:35:31.129 17:29:48 keyring_file -- common/autotest_common.sh@973 -- # kill 2776536 00:35:31.129 Received shutdown signal, test time was about 1.000000 seconds 00:35:31.129 00:35:31.129 Latency(us) 00:35:31.129 [2024-11-20T16:29:49.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.129 [2024-11-20T16:29:49.172Z] =================================================================================================================== 00:35:31.129 [2024-11-20T16:29:49.172Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:31.129 17:29:48 keyring_file -- common/autotest_common.sh@978 -- # wait 2776536 00:35:31.129 17:29:49 keyring_file -- keyring/file.sh@21 -- # killprocess 2775012 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2775012 ']' 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2775012 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2775012 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2775012' 00:35:31.129 killing process with pid 2775012 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@973 -- # kill 2775012 00:35:31.129 17:29:49 keyring_file -- common/autotest_common.sh@978 -- # wait 2775012 00:35:31.431 00:35:31.431 real 0m11.705s 00:35:31.431 user 0m29.215s 00:35:31.431 sys 0m2.564s 00:35:31.431 17:29:49 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:31.431 17:29:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:31.431 ************************************ 00:35:31.431 END TEST keyring_file 00:35:31.431 ************************************ 00:35:31.431 17:29:49 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:31.431 17:29:49 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:31.431 17:29:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:31.431 17:29:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.431 17:29:49 -- common/autotest_common.sh@10 -- # set +x 00:35:31.690 ************************************ 00:35:31.690 START TEST keyring_linux 00:35:31.690 ************************************ 00:35:31.690 17:29:49 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:31.690 Joined session keyring: 857353604 00:35:31.690 * Looking for test storage... 
00:35:31.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:31.690 17:29:49 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:31.690 17:29:49 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:31.690 17:29:49 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:31.690 17:29:49 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.690 17:29:49 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:31.690 17:29:49 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.690 17:29:49 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:31.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.690 --rc genhtml_branch_coverage=1 00:35:31.690 --rc genhtml_function_coverage=1 00:35:31.690 --rc genhtml_legend=1 00:35:31.690 --rc geninfo_all_blocks=1 00:35:31.690 --rc geninfo_unexecuted_blocks=1 00:35:31.690 00:35:31.690 ' 00:35:31.690 17:29:49 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:31.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.690 --rc genhtml_branch_coverage=1 00:35:31.690 --rc genhtml_function_coverage=1 00:35:31.691 --rc genhtml_legend=1 00:35:31.691 --rc geninfo_all_blocks=1 00:35:31.691 --rc geninfo_unexecuted_blocks=1 00:35:31.691 00:35:31.691 ' 
00:35:31.691 17:29:49 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:31.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.691 --rc genhtml_branch_coverage=1 00:35:31.691 --rc genhtml_function_coverage=1 00:35:31.691 --rc genhtml_legend=1 00:35:31.691 --rc geninfo_all_blocks=1 00:35:31.691 --rc geninfo_unexecuted_blocks=1 00:35:31.691 00:35:31.691 ' 00:35:31.691 17:29:49 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:31.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.691 --rc genhtml_branch_coverage=1 00:35:31.691 --rc genhtml_function_coverage=1 00:35:31.691 --rc genhtml_legend=1 00:35:31.691 --rc geninfo_all_blocks=1 00:35:31.691 --rc geninfo_unexecuted_blocks=1 00:35:31.691 00:35:31.691 ' 00:35:31.691 17:29:49 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.691 17:29:49 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.691 17:29:49 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.691 17:29:49 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.691 17:29:49 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.691 17:29:49 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.691 17:29:49 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.691 17:29:49 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.691 17:29:49 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:31.691 17:29:49 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:31.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:31.691 17:29:49 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:31.691 17:29:49 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:31.691 17:29:49 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:31.691 17:29:49 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:31.691 17:29:49 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:31.691 17:29:49 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:31.691 17:29:49 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:31.691 /tmp/:spdk-test:key0 00:35:31.691 17:29:49 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:31.691 17:29:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:31.951 17:29:49 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:31.951 17:29:49 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:31.951 17:29:49 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:31.951 17:29:49 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:31.951 17:29:49 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:31.951 17:29:49 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:31.951 17:29:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:31.951 17:29:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:31.951 /tmp/:spdk-test:key1 00:35:31.951 17:29:49 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2777099 00:35:31.951 17:29:49 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:31.951 17:29:49 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2777099 00:35:31.951 17:29:49 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2777099 ']' 00:35:31.951 17:29:49 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.951 17:29:49 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.951 17:29:49 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.951 17:29:49 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.951 17:29:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:31.951 [2024-11-20 17:29:49.822273] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:35:31.951 [2024-11-20 17:29:49.822322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777099 ] 00:35:31.951 [2024-11-20 17:29:49.897416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.951 [2024-11-20 17:29:49.940321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:32.210 17:29:50 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:32.210 [2024-11-20 17:29:50.150714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.210 null0 00:35:32.210 [2024-11-20 17:29:50.182764] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:32.210 [2024-11-20 17:29:50.183150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.210 17:29:50 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:32.210 81910789 00:35:32.210 17:29:50 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:32.210 929613330 00:35:32.210 17:29:50 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2777106 00:35:32.210 17:29:50 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2777106 /var/tmp/bperf.sock 00:35:32.210 17:29:50 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2777106 ']' 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:32.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.210 17:29:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:32.468 [2024-11-20 17:29:50.255626] Starting SPDK v25.01-pre git sha1 0b4b4be7e / DPDK 24.03.0 initialization... 
00:35:32.468 [2024-11-20 17:29:50.255667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777106 ] 00:35:32.468 [2024-11-20 17:29:50.329578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.468 [2024-11-20 17:29:50.371897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.468 17:29:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.468 17:29:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:32.468 17:29:50 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:32.468 17:29:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:32.727 17:29:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:32.727 17:29:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:32.986 17:29:50 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:32.986 17:29:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:32.986 [2024-11-20 17:29:51.017876] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:33.245 nvme0n1 00:35:33.245 17:29:51 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:33.245 17:29:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:33.245 17:29:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:33.245 17:29:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:33.245 17:29:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:33.245 17:29:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:33.504 17:29:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:33.504 17:29:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:33.504 17:29:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@25 -- # sn=81910789 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 81910789 == \8\1\9\1\0\7\8\9 ]] 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 81910789 00:35:33.504 17:29:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:33.504 17:29:51 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:33.763 Running I/O for 1 seconds... 00:35:34.699 21670.00 IOPS, 84.65 MiB/s 00:35:34.699 Latency(us) 00:35:34.699 [2024-11-20T16:29:52.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.699 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:34.699 nvme0n1 : 1.01 21672.97 84.66 0.00 0.00 5887.05 4525.10 10298.51 00:35:34.699 [2024-11-20T16:29:52.742Z] =================================================================================================================== 00:35:34.699 [2024-11-20T16:29:52.742Z] Total : 21672.97 84.66 0.00 0.00 5887.05 4525.10 10298.51 00:35:34.699 { 00:35:34.699 "results": [ 00:35:34.699 { 00:35:34.699 "job": "nvme0n1", 00:35:34.699 "core_mask": "0x2", 00:35:34.699 "workload": "randread", 00:35:34.699 "status": "finished", 00:35:34.699 "queue_depth": 128, 00:35:34.699 "io_size": 4096, 00:35:34.699 "runtime": 1.005815, 00:35:34.699 "iops": 21672.97166974046, 00:35:34.699 "mibps": 84.66004558492367, 00:35:34.699 "io_failed": 0, 00:35:34.699 "io_timeout": 0, 00:35:34.699 "avg_latency_us": 5887.049121606714, 00:35:34.699 "min_latency_us": 4525.104761904762, 00:35:34.699 "max_latency_us": 10298.514285714286 00:35:34.699 } 00:35:34.699 ], 00:35:34.699 "core_count": 1 00:35:34.699 } 00:35:34.699 17:29:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:34.699 17:29:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:34.958 17:29:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:34.958 17:29:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:34.958 17:29:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:34.958 17:29:52 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:34.958 17:29:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:34.958 17:29:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:35.218 17:29:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:35.218 [2024-11-20 17:29:53.201633] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:35.218 [2024-11-20 17:29:53.202376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfbf60 (107): Transport endpoint is not connected 00:35:35.218 [2024-11-20 17:29:53.203370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfbf60 (9): Bad file descriptor 00:35:35.218 [2024-11-20 17:29:53.204371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:35.218 [2024-11-20 17:29:53.204380] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:35.218 [2024-11-20 17:29:53.204388] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:35.218 [2024-11-20 17:29:53.204396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:35.218 request: 00:35:35.218 { 00:35:35.218 "name": "nvme0", 00:35:35.218 "trtype": "tcp", 00:35:35.218 "traddr": "127.0.0.1", 00:35:35.218 "adrfam": "ipv4", 00:35:35.218 "trsvcid": "4420", 00:35:35.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:35.218 "prchk_reftag": false, 00:35:35.218 "prchk_guard": false, 00:35:35.218 "hdgst": false, 00:35:35.218 "ddgst": false, 00:35:35.218 "psk": ":spdk-test:key1", 00:35:35.218 "allow_unrecognized_csi": false, 00:35:35.218 "method": "bdev_nvme_attach_controller", 00:35:35.218 "req_id": 1 00:35:35.218 } 00:35:35.218 Got JSON-RPC error response 00:35:35.218 response: 00:35:35.218 { 00:35:35.218 "code": -5, 00:35:35.218 "message": "Input/output error" 00:35:35.218 } 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@33 -- # sn=81910789 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 81910789 00:35:35.218 1 links removed 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:35.218 
17:29:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@33 -- # sn=929613330 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 929613330 00:35:35.218 1 links removed 00:35:35.218 17:29:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2777106 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2777106 ']' 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2777106 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.218 17:29:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2777106 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2777106' 00:35:35.478 killing process with pid 2777106 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 2777106 00:35:35.478 Received shutdown signal, test time was about 1.000000 seconds 00:35:35.478 00:35:35.478 Latency(us) 00:35:35.478 [2024-11-20T16:29:53.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.478 [2024-11-20T16:29:53.521Z] =================================================================================================================== 00:35:35.478 [2024-11-20T16:29:53.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 2777106 
00:35:35.478 17:29:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2777099 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2777099 ']' 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2777099 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2777099 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2777099' 00:35:35.478 killing process with pid 2777099 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 2777099 00:35:35.478 17:29:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 2777099 00:35:36.046 00:35:36.046 real 0m4.336s 00:35:36.046 user 0m8.221s 00:35:36.046 sys 0m1.395s 00:35:36.046 17:29:53 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:36.046 17:29:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:36.046 ************************************ 00:35:36.046 END TEST keyring_linux 00:35:36.046 ************************************ 00:35:36.046 17:29:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:36.046 17:29:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:36.046 17:29:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:36.046 17:29:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:36.046 17:29:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:36.046 17:29:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:36.046 17:29:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:36.046 17:29:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:36.046 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:35:36.046 17:29:53 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:36.046 17:29:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:36.046 17:29:53 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:36.046 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:35:41.320 INFO: APP EXITING 00:35:41.320 INFO: killing all VMs 00:35:41.320 INFO: killing vhost app 00:35:41.320 INFO: EXIT DONE 00:35:43.854 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:43.854 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:43.854 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:43.854 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:47.144 Cleaning 00:35:47.144 Removing: /var/run/dpdk/spdk0/config 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:47.144 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:47.144 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:47.144 Removing: /var/run/dpdk/spdk1/config 00:35:47.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:47.144 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:47.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:47.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:47.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:47.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:47.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:47.145 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:47.145 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:47.145 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:47.145 Removing: /var/run/dpdk/spdk2/config 00:35:47.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:47.145 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:47.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:47.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:47.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:47.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:47.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:47.145 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:47.145 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:47.145 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:47.145 Removing: /var/run/dpdk/spdk3/config
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:47.145 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:47.145 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:47.145 Removing: /var/run/dpdk/spdk4/config
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:47.145 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:47.145 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:47.145 Removing: /dev/shm/bdev_svc_trace.1
00:35:47.145 Removing: /dev/shm/nvmf_trace.0
00:35:47.145 Removing: /dev/shm/spdk_tgt_trace.pid2297835
00:35:47.145 Removing: /var/run/dpdk/spdk0
00:35:47.145 Removing: /var/run/dpdk/spdk1
00:35:47.145 Removing: /var/run/dpdk/spdk2
00:35:47.145 Removing: /var/run/dpdk/spdk3
00:35:47.145 Removing: /var/run/dpdk/spdk4
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2295453
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2296530
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2297835
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2298478
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2299430
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2299669
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2300640
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2300659
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2301011
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2302745
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2304114
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2304551
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2304734
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2304940
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2305229
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2305484
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2305731
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2306017
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2306762
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2309768
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2310023
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2310273
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2310283
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2310775
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2310783
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2311276
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2311502
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2311768
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2311872
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2312043
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2312220
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2312620
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2312868
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2313163
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2317094
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2321595
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2332190
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2332883
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2337158
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2337408
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2341674
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2347564
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2350329
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2360602
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2369520
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2371353
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2372408
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2389690
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2393758
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2439507
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2444900
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2450667
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2457163
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2457165
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2458082
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2458841
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2459698
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2460379
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2460383
00:35:47.145 Removing: /var/run/dpdk/spdk_pid2460620
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2460643
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2460780
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2461561
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2462484
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2463397
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2463871
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2464006
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2464315
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2465335
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2466323
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2474987
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2503872
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2508361
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2510140
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2511791
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2512023
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2512190
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2512270
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2512776
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2514609
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2515372
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2515871
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2517979
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2518466
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2519179
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2523264
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2528869
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2528870
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2528871
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2532755
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2541230
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2545051
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2551684
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2553029
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2554404
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2555724
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2560422
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2564766
00:35:47.404 Removing: /var/run/dpdk/spdk_pid2568791
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2576168
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2576170
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2580882
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2581111
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2581339
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2581705
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2581803
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2586402
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2586887
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2591437
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2593984
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2600045
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2605303
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2614009
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2621223
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2621226
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2640015
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2640497
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2641191
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2641665
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2642404
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2642876
00:35:47.405 Removing: /var/run/dpdk/spdk_pid2643562
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2644440
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2648593
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2648882
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2654934
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2655167
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2660425
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2664657
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2674391
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2674952
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2679109
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2679389
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2683616
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2689259
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2692350
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2702289
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2711117
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2712786
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2713696
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2729606
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2733412
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2736169
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2744584
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2744679
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2749848
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2751815
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2753781
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2754831
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2756837
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2758080
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2766839
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2767309
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2767768
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2770249
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2770717
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2771180
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2775012
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2775017
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2776536
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2777099
00:35:47.664 Removing: /var/run/dpdk/spdk_pid2777106
00:35:47.664 Clean
00:35:47.664 17:30:05 -- common/autotest_common.sh@1453 -- # return 0
00:35:47.664 17:30:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:47.664 17:30:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:47.664 17:30:05 -- common/autotest_common.sh@10 -- # set +x
00:35:47.923 17:30:05 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:47.923 17:30:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:47.923 17:30:05 -- common/autotest_common.sh@10 -- # set +x
00:35:47.923 17:30:05 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:47.923 17:30:05 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:47.923 17:30:05 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:47.923 17:30:05 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:47.923 17:30:05 -- spdk/autotest.sh@398 -- # hostname
00:35:47.923 17:30:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:47.923 geninfo: WARNING: invalid characters removed from testname!
00:36:09.856 17:30:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:11.234 17:30:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:13.139 17:30:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:15.044 17:30:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:16.948 17:30:34 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:18.881 17:30:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:20.786 17:30:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:20.786 17:30:38 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:20.786 17:30:38 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:20.786 17:30:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:20.786 17:30:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:20.786 17:30:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:20.786 + [[ -n 2218176 ]]
00:36:20.786 + sudo kill 2218176
00:36:20.795 [Pipeline] }
00:36:20.810 [Pipeline] // stage
00:36:20.816 [Pipeline] }
00:36:20.831 [Pipeline] // timeout
00:36:20.836 [Pipeline] }
00:36:20.852 [Pipeline] // catchError
00:36:20.858 [Pipeline] }
00:36:20.873 [Pipeline] // wrap
00:36:20.880 [Pipeline] }
00:36:20.893 [Pipeline] // catchError
00:36:20.902 [Pipeline] stage
00:36:20.904 [Pipeline] { (Epilogue)
00:36:20.917 [Pipeline] catchError
00:36:20.919 [Pipeline] {
00:36:20.931 [Pipeline] echo
00:36:20.933 Cleanup processes
00:36:20.939 [Pipeline] sh
00:36:21.225 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:21.225 2788307 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:21.245 [Pipeline] sh
00:36:21.535 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:21.535 ++ grep -v 'sudo pgrep'
00:36:21.535 ++ awk '{print $1}'
00:36:21.535 + sudo kill -9
00:36:21.535 + true
00:36:21.548 [Pipeline] sh
00:36:21.837 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:34.062 [Pipeline] sh
00:36:34.346 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:34.346 Artifacts sizes are good
00:36:34.361 [Pipeline] archiveArtifacts
00:36:34.369 Archiving artifacts
00:36:34.526 [Pipeline] sh
00:36:34.880 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:34.896 [Pipeline] cleanWs
00:36:34.906 [WS-CLEANUP] Deleting project workspace...
00:36:34.906 [WS-CLEANUP] Deferred wipeout is used...
00:36:34.913 [WS-CLEANUP] done
00:36:34.915 [Pipeline] }
00:36:34.934 [Pipeline] // catchError
00:36:34.947 [Pipeline] sh
00:36:35.229 + logger -p user.info -t JENKINS-CI
00:36:35.238 [Pipeline] }
00:36:35.252 [Pipeline] // stage
00:36:35.258 [Pipeline] }
00:36:35.272 [Pipeline] // node
00:36:35.277 [Pipeline] End of Pipeline
00:36:35.315 Finished: SUCCESS